The 2010 NPR report described the administration’s approach to maintaining the U.S. nuclear deterrent capability while pursuing further reductions in nuclear weapons. The 2010 NPR was the third comprehensive assessment of U.S. nuclear policy and strategy conducted by the United States since the end of the Cold War; previous reviews were completed in 1994 and 2001. The Office of the Secretary of Defense and the Joint Staff led the effort in consultation with the Departments of State and Energy. Other organizations participated, including the military departments, the combatant commands, the Departments of Homeland Security and Treasury, the Office of the Director of National Intelligence, and the National Security Council and its supporting interagency bodies.

The 2010 NPR report focused on five objectives:

1. preventing nuclear proliferation and nuclear terrorism;
2. reducing the role of U.S. nuclear weapons in the U.S. national security strategy;
3. maintaining strategic deterrence and stability at reduced nuclear force levels;
4. strengthening regional deterrence and reassuring U.S. allies and partners; and
5. sustaining a safe, secure, and effective nuclear arsenal.

The third of these objectives—maintaining strategic deterrence and stability at reduced nuclear force levels—emphasizes the importance of bilateral and verifiable reductions in strategic nuclear weapons in coordination with Russia. In support of this objective, the United States signed a new Strategic Arms Reduction Treaty with Russia—known as New START—on April 8, 2010, which entered into force on February 5, 2011. New START gives Russia and the United States 7 years to reduce their strategic delivery vehicles and strategic nuclear warheads—under the counting rules outlined in the treaty—and will remain in force for 10 years.

According to DOD’s April 2014 report on its plan to implement New START, DOD plans to maintain 400 deployed intercontinental ballistic missiles, 240 deployed submarine-launched ballistic missiles, and 60 deployed heavy bombers (a mix of B-52s and B-2s). Taken together, these add up to 700 deployed delivery vehicles and fall within the New START limits that take effect in 2018. DOD and military service officials told us these numbers reflect DOD’s current planned strategic force structure for implementing New START. Figure 1 shows DOD’s planned deployed strategic force structure for implementing New START, including the number of delivery vehicles for each leg of the triad.
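As a quick check on the force-structure arithmetic above, the minimal Python sketch below tallies the planned forces against New START’s central limits of 700 deployed delivery vehicles and 800 deployed and nondeployed launchers and heavy bombers combined. The nondeployed figures come from DOD’s April 2014 implementation plan, discussed later in this report; the code itself is purely illustrative and is not part of any DOD analysis.

```python
# Illustrative tally of DOD's planned strategic force structure against
# New START's central limits (700 deployed delivery vehicles; 800 deployed
# plus nondeployed launchers and heavy bombers). Force numbers are from
# DOD's April 2014 implementation plan as described in this report.

deployed = {"ICBMs": 400, "SLBMs": 240, "heavy bombers": 60}
nondeployed = {"ICBM silos": 54, "SLBM launch tubes": 40, "heavy bombers": 6}

deployed_total = sum(deployed.values())                      # 700
combined_total = deployed_total + sum(nondeployed.values())  # 800

assert deployed_total <= 700 and combined_total <= 800
print(deployed_total, combined_total)
```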
In 2011, the President directed DOD to conduct a follow-on analysis to the NPR, which reviewed U.S. nuclear deterrence requirements. The review resulted in the development of the President’s nuclear employment guidance and a DOD report on this guidance, which was completed in June 2013. The review was led by DOD and included senior-level participation by the Office of the Secretary of Defense, the Joint Chiefs of Staff, Strategic Command, the Department of State, the Department of Energy, the Office of the Director of National Intelligence, and the National Security Staff (now known as the National Security Council). As indicated in DOD’s 2013 report on the President’s nuclear employment guidance, the review assessed what changes to nuclear employment strategy could best support the five key objectives of the 2010 NPR and a sixth objective: achieving U.S. and allied objectives if deterrence fails.

In June 2013, DOD completed a Strategic Choices Management Review, which, according to DOD officials, considered reductions in nuclear forces, among other things. According to the Secretary of Defense, the purpose of the Strategic Choices Management Review was to understand the effect that further budget reductions would have on the department and to develop options to deal with these reductions. Figure 2 shows a timeline of events and reviews related to DOD’s assessment of U.S. nuclear forces from 2010 through 2014.

DOD assessed the need for each leg of the strategic triad in support of the 2010 NPR and considered other reductions to nuclear forces in subsequent reviews. The department identified advantages of each leg of the triad and concluded that retaining all three would help maintain strategic deterrence and stability. The 2010 NPR report states that the administration considered various options for U.S. nuclear force structure, including options in which the United States would eliminate one leg of the triad. DOD officials also told us that the department had assessed nuclear force reductions as part of subsequent reviews, including during the development of the President’s nuclear employment guidance, the 2013 Strategic Choices Management Review, and the development of DOD’s plan to implement New START.

The 2010 NPR report identified advantages of each leg of the triad that, in DOD’s view, warrant retaining all three legs, even in light of the planned reductions under New START. These advantages—including the survivability of the sea-based leg, the intercontinental ballistic missiles’ contribution to stability, and the ability of the nuclear-capable bombers to visibly forward deploy—are further described in Navy and Air Force acquisition documents completed both before and after the 2010 NPR, from 2008 through 2014. These acquisition documents do not include an assessment of the strategic triad as a whole but help define and clarify the advantages that are identified in the 2010 NPR report. In addition to identifying the advantages of each leg, the 2010 NPR report indicates that retaining all three legs best maintains strategic stability at reasonable cost while hedging against potential technical problems or vulnerabilities.

The 2010 NPR report states that, for the planned reductions under New START, DOD considered force structure options in which the department would eliminate a leg of the triad. DOD officials told us that in senior-level force structure meetings in support of the NPR, DOD and key stakeholders discussed and considered alternatives to a triad for U.S. strategic force structure. DOD officials were unable to provide us documentation of the NPR’s analysis of the strategic force structure options that were considered; officials from the Office of the Secretary of Defense, Joint Staff, and Strategic Command told us that much of the NPR analysis on the consideration of different strategic force structure options was discussed in senior-level meetings and was not documented. In addition to the discussions and analysis of options for alternative strategic force structures that occurred during the development of the 2010 NPR, Strategic Command, Air Force, and Navy officials told us that they had also analyzed alternative strategic force structures in advance of the NPR discussions. We reviewed examples of Air Force and Strategic Command analyses and reported on these in our classified report.
DOD’s 2013 unclassified report on the President’s nuclear employment guidance states that DOD also assessed potential reductions in U.S. nuclear forces in the follow-on review to the NPR that led to the development of the 2013 Presidential nuclear employment guidance. The report says that, in that review, the President determined that the United States can safely pursue up to a one-third reduction in deployed nuclear weapons from the level established in New START, while still ensuring the security of the United States and U.S. allies and partners and maintaining a strong and credible strategic deterrent. DOD officials told us that, to avoid large disparities in nuclear capabilities, the report also stated the administration’s intent to seek negotiated cuts with Russia. However, such negotiations had not yet begun as of August 2016.

DOD officials told us that, in the June 2013 Strategic Choices Management Review—which supported the department’s budget review—the department considered cutting nuclear forces and capabilities. The purpose of the Strategic Choices Management Review was to examine the potential effect of additional anticipated budget reductions on the department and generally review how DOD would allocate resources when executing its fiscal year 2014 budget and preparing its fiscal years 2014 through 2019 budget plans. According to DOD officials, the administration and the department ultimately decided against the options to reduce nuclear forces that were considered in the 2013 Strategic Choices Management Review.

As we have previously reported, DOD considered alternatives to its strategic force structure in senior-level meetings for implementing New START. According to DOD officials, in these senior-level meetings—which were organized by the Joint Staff and led by the Office of the Under Secretary of Defense for Policy—DOD finalized its recommendations to the National Security Council for the strategic force structure to implement the treaty. DOD officials told us that, during these meetings, DOD participants considered options to comply with the treaty. They also told us that DOD ultimately recommended maintaining 400 deployed intercontinental ballistic missiles, 240 deployed submarine-launched ballistic missiles, 60 deployed heavy bombers, 54 nondeployed intercontinental ballistic missile silos, 40 nondeployed submarine-launched ballistic missile launch tubes, and 6 nondeployed nuclear-capable heavy bombers. According to officials, the National Security Council approved this recommendation, which is reflected in DOD’s April 2014 report on its plan to implement New START.

We provided a draft of the classified version of this report to DOD for review and comment. In response to that draft report, DOD provided technical comments, which we have incorporated as appropriate.

We are providing copies of this report to the appropriate congressional committees, the Secretary of Defense, the Secretary of the Air Force, the Secretary of the Navy, the Secretary of the Army, the Joint Staff, and the Under Secretary of Defense for Policy. In addition, this report is available at no charge on the GAO website at http://www.gao.gov.

If you or your staff have any questions about this report, please contact me at (202) 512-9971 or kirschbaumj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in the appendix.

Joseph W. Kirschbaum, (202) 512-9971 or kirschbaumj@gao.gov.
In addition to the contact named above, Penney Harwell Caramia (Assistant Director), Scott Fletcher, Jonathan Gill, Joanne Landesman, Amie Lesser, Brian Mazanec, Timothy Persons, Steven Putansu, Michael Shaughnessy, and Sam Wilson made key contributions to this report.
Since the 1960s, the United States has deployed nuclear weapons on three types of strategic delivery vehicles collectively known as the strategic triad. The triad comprises the sea-based leg (submarine-launched ballistic missiles), ground-based leg (intercontinental ballistic missiles), and airborne leg (nuclear-capable heavy bombers). As a result of arms control agreements and strategic policies, the number of U.S. nuclear weapons and strategic delivery vehicles has been reduced substantially; however, the strategic triad has remained intact. DOD and the Department of Energy are planning to invest significant resources to recapitalize and modernize the strategic triad in the coming decades. The departments projected in 2015 that the costs of maintaining U.S. nuclear forces for fiscal years 2016 through 2025 would total $319.8 billion, and DOD expects recapitalization and modernization efforts to extend into the 2030s.

GAO was asked to review DOD's analysis of the decision to retain all three legs of the strategic triad. This report describes the processes DOD used in supporting that decision. GAO reviewed documentation and interviewed officials from DOD and the military services on the key reviews DOD carried out from 2009 to 2014—including the 2010 Nuclear Posture Review—in analyzing its strategic force structure.

The Department of Defense (DOD) assessed the need for each leg of the strategic triad in support of the 2010 Nuclear Posture Review and considered other reductions to nuclear forces in subsequent reviews. The department identified advantages of each leg of the triad and concluded that retaining all three would help maintain strategic deterrence and stability. The advantages DOD identified include the survivability of the sea-based leg, the intercontinental ballistic missiles' contribution to stability, and the ability of the nuclear-capable bombers to visibly forward deploy. The 2010 Nuclear Posture Review Report states—and DOD officials also told GAO—that the administration considered various options for the U.S. nuclear force structure, including options in which DOD would eliminate one leg of the triad. For example, Strategic Command, Air Force, and Navy officials told GAO that they had analyzed alternative strategic force structures in preparation for the 2010 Nuclear Posture Review. DOD officials also told GAO that the department had assessed nuclear force reductions as part of reviews conducted after the Nuclear Posture Review, including during the development of the President's 2013 nuclear employment guidance, the 2013 Strategic Choices Management Review, and DOD's 2014 plan to implement the New Strategic Arms Reduction Treaty (New START) with Russia. The figure shows DOD's current planned strategic force structure for implementing New START, including the number of delivery vehicles that would be retained for each leg of the triad.

This is a public version of a classified report GAO issued in May 2016. It excludes classified information on warhead levels, the specific advantages of each leg of the triad, and some of the analyses of alternatives that were considered. GAO is not making any recommendations in this report. DOD provided technical comments, which were incorporated as appropriate.
The demands upon judges’ time are largely a function of both the number and complexity of the cases on their dockets. Some types of cases may demand relatively little time, and others may require many hours of work. To measure the case-related workload of district court judges, the Judicial Conference has adopted weighted case filings. The purpose of the district court case weights was to create a measure of the average judge time that a specific number and mix of cases filed in a district court would require. Importantly, the weights were designed to be descriptive, not prescriptive—that is, they were designed to measure the national average amount of time that judges actually spent on specific types of cases, not how much time judges should spend on specific types of cases. Moreover, the weights were designed to measure only case-related judge workload. Judges have noncase-related duties and responsibilities, such as administrative tasks, that are not reflected in the case weights.

With few exceptions, such as cases that are remanded to a district court from the courts of appeals, each civil and criminal case filed in a district court is assigned a case weight based on the subject matter of the case. The weight of the overall average case is 1.0, and all other weights were established relative to this national average case. Thus, a case with a weight of 0.5 would be expected to require on average about half as much judge time as the national average case, and a case with a weight of 2.0 would be expected to require on average about twice as much judge time. Case weights for criminal felony defendants are applied on a per-defendant basis. For example, the case weight for heroin/cocaine distribution is 2.27; if such a case involved two defendants, the court would be credited with a weight of 4.54—two times the assigned case weight of 2.27. Of course, the actual amount of time a judge may spend on any specific case may be more or less than the national average for that type of case.

Total weighted filings for a district are determined by summing the case weights associated with all the cases filed in the district during the year. Weighted case filings per authorized judgeship is the total annual weighted filings divided by the total number of authorized judgeships for the district. For example, if a district had total weighted filings of 4,600 and 10 authorized judgeships, its weighted filings per authorized judgeship would be 460. The Judicial Conference uses weighted filings of 430 or more per authorized judgeship as an indication that a district may need one or more additional judgeships. Thus, a district with 460 weighted filings per authorized judgeship could be considered for an additional judgeship.
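To make the weighted-filings arithmetic concrete, the short Python sketch below reproduces the example just described. Only the 2.27 heroin/cocaine weight and the 430 benchmark come from the text; the case mix itself is invented for illustration and is not an actual district’s caseload.

```python
# Hypothetical illustration of weighted case filings per authorized judgeship.
# Only the 2.27 weight and the 430 benchmark come from the text above; the
# case mix below is invented for the example.

BENCHMARK = 430  # weighted filings per judgeship suggesting a possible need

def weighted_filings_per_judgeship(case_mix, judgeships):
    """case_mix: list of (case_weight, number_of_filings) pairs.
    Felony weights apply per defendant, so one heroin/cocaine case
    with two defendants enters as (2.27, 2)."""
    total = sum(weight * count for weight, count in case_mix)
    return total, total / judgeships

case_mix = [
    (2.27, 2),    # one heroin/cocaine distribution case, two defendants
    (1.0, 4000),  # filings at the national average weight
    (0.5, 1191),  # lighter-than-average filings
]
total, per_judgeship = weighted_filings_per_judgeship(case_mix, judgeships=10)
print(round(total), round(per_judgeship))  # about 4,600 total and 460 per judgeship
assert per_judgeship >= BENCHMARK  # could be considered for another judgeship
```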
The Judicial Conference approved the use of the current district court case weights in 1993. The weights are based on a “case-tracking time study,” conducted between 1987 and 1993, in which judges recorded the amount of time spent on each of their cases included in the study. The study included about 8,100 civil cases and about 4,200 criminal cases. Overall, the weighted case filings, as approved in 1993, are a reasonably accurate method of measuring the average judge time that a specific number and mix of cases filed in a district court would require. The methodology used to develop the case weights was reasonable: it used a valid sampling procedure, developed weights based on actual case-related time recorded by judges from case filing to disposition, and included a measure (standard errors) of the statistical confidence in the final weight for each weighted case type.

The case weights are almost 10 years old, however, and the time data on which they were based are as much as 15 years old. Changes since the case weights were finalized in 1993, such as changes in the characteristics of cases filed in federal district courts and in case management practices, may affect how accurately the weights continue to reflect the time burden on district court judges today. For example, since 1993, new civil causes of action (such as telemarketing issues) and criminal offenses (such as new terrorism offenses) have had to be accommodated within the existing case-weight structure. According to FJC officials, where the new cause of action or criminal offense is similar to an existing case-weight type, the weight for the closest case type is assigned. Where the new cause of action or criminal offense is clearly different from any existing case-weight category, the weight assigned is that for either “all other” civil cases or “all other” criminal cases.

The Subcommittee on Judicial Statistics of the Judicial Conference’s Committee on Judicial Resources has approved the research design for revising the current case weights, with the goal of having new weights submitted to the Resources Committee for review in the summer of 2004. The design for the new case weights relies on three sources of data for specific types of cases: (1) data from automated databases identifying the docketed events associated with cases; (2) data from automated sources on the time associated with courtroom events for cases, such as trials or hearings; and (3) estimated time data from structured, guided discussions among experienced judges on the time associated with noncourtroom events for cases, such as reading briefs or writing opinions.

Although the proposed methodology appears to offer the benefits of reduced judicial burden (no time study data collection), potential cost savings, and reduced calendar time to develop the new weights, we have two principal concerns about the research design: the challenge of obtaining reliable, comparable data from two different automated data systems for the analysis, and the limited collection of actual data on the time judges spend on cases.

First, the design assumes that judicial time spent on a given case can be accurately estimated by viewing the case as a set of individual tasks or events. Information about event frequencies and, where available, time spent on the events would be extracted from existing administrative databases and reports and then used to develop estimates of the judge time spent on different types of cases. For event data, the research design proposes using data from new technology (the Case Management/Electronic Case Filing System) that is currently being introduced into the court system for recording case management information. However, not all courts have implemented the new system, and data from the existing and new systems will have to be integrated to obtain and analyze the event data. FJC researchers, who would conduct the research, recognize the challenges this poses and have developed a strategy for addressing the issues, which includes forming a technical advisory group from FJC, the Administrative Office of the U.S. Courts, and individual courts to develop a method of reliably extracting and integrating data from the two case management systems for analysis.

Second, the research design does not require judges to record time spent on individual cases. Actual time data would be limited to those available from existing reports on the time associated with courtroom events and proceedings for different types of cases. However, a majority of district judges’ time is spent on case-related work outside the courtroom. The time required for noncourtroom events would be derived from structured, guided discussions of groups of 8 to 13 experienced district court judges in each of the 12 geographic circuits (about 100 judges in all). The judges would develop estimates of the time required for different events in different types of cases within each circuit, using FJC-developed “default values” as the reference point for developing their estimates. These default values would be based in part on the existing case weights and in part on other types of analyses. Following the meetings of the judges in each circuit, a national group of 24 judges (2 from each circuit) would consider the data from the 12 circuit groups and develop the new weights.

The accuracy of the judges’ time estimates depends on the experience and knowledge of the participating judges and on the accuracy and reliability of the judges’ recall about the time required for different events in different types of cases—about 150 case types, if all the types in the current case weights were used. These consensus data cannot be used to calculate statistical measures of the accuracy of the resulting case weights. Thus, it will not be possible to objectively, statistically assess how accurate the new case weights are—weights on whose accuracy the Judicial Conference will rely in assessing judgeship needs in the future. A time study conducted concurrently with the proposed research methodology would be advisable to identify potential shortcomings of the event-based methodology and to assess the relative accuracy of the case weights produced using that methodology. In the absence of a concurrent time study, there would be no objective statistical way to determine the accuracy of the case weights produced by the proposed event-based methodology.

The principal workload measure that the Judicial Conference uses to assess the need for additional courts of appeals judges is adjusted case filings. We found that adjusted case filings are based on data available from standard statistical reports for the courts of appeals. The measure is not based on any empirical data about the judge time required by different types of cases in the courts of appeals. The Judicial Conference’s policy is that courts of appeals with adjusted case filings of 500 or more per three-judge panel may be considered for one or more additional judgeships. Courts of appeals generally decide cases using constantly rotating three-judge panels; thus, if a court had 12 authorized judgeships, those judges could be assigned to four panels of three judges each. In assessing judgeship needs for the courts of appeals, the Conference may also consider factors other than adjusted case filings, such as the geography of the circuit or the median time from case filing to disposition. Adjusted case filings are used for 11 of the 12 courts of appeals; the measure is not used for the Court of Appeals for the D.C. Circuit.
An FJC study of that court’s workload determined that adjusted case filings were not an appropriate means of measuring the court’s judgeship needs. The court had a high proportion of administrative agency appeals, which occurred almost exclusively in the Court of Appeals for the D.C. Circuit and were more burdensome than other types of cases in several respects—e.g., more independently represented participants per case, more briefs filed per case, and a higher rate of case consolidation.

Essentially, the adjusted case filings workload measure counts all case filings equally, with two exceptions. First, cases refiled and approved for reinstatement are excluded from total case filings. Second, two-thirds of pro se cases—defined by the Administrative Office as cases in which one or both of the parties are not represented by an attorney—are deducted from total case filings (that is, pro se cases are effectively weighted at 0.33). For example, a court with 600 total pro se filings in a fiscal year would be credited with 198 adjusted pro se case filings (600 x 0.33). The remaining non-pro se cases would be weighted at 1.0 each. Thus, a court of appeals with 1,600 case filings (excluding reinstatements)—600 pro se cases and 1,000 non-pro se cases—would be credited with 1,198 adjusted case filings (198 discounted pro se cases plus 1,000 non-pro se cases). If this court had 6 judges (allowing two panels of 3 judges each), it would have 599 adjusted case filings per 3-judge panel and thus, under Judicial Conference policy, could be considered for an additional judgeship.
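The same example can be expressed as a short sketch. The 0.33 pro se discount and the 500 benchmark come from the text above; the function itself is illustrative only and assumes reinstatements have already been excluded from the counts.

```python
# Illustrative computation of adjusted case filings per three-judge panel,
# using the example from the text: 600 pro se and 1,000 other filings, 6 judges.

PRO_SE_WEIGHT = 0.33   # two-thirds of pro se filings are deducted
BENCHMARK = 500        # adjusted filings per three-judge panel

def adjusted_filings_per_panel(pro_se_filings, other_filings, judgeships):
    adjusted = pro_se_filings * PRO_SE_WEIGHT + other_filings * 1.0
    panels = judgeships / 3
    return adjusted, adjusted / panels

adjusted, per_panel = adjusted_filings_per_panel(600, 1000, judgeships=6)
print(round(adjusted), round(per_panel))  # 1198 adjusted filings, 599 per panel
assert per_panel >= BENCHMARK  # could be considered for an additional judgeship
```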
The current court of appeals workload measure represents an effort to improve the previous measure. In our 1993 report on judgeship needs assessment, we noted that the restraint of individual courts of appeals, not the workload standard, seemed to have determined the actual number of appellate judgeships the Judicial Conference requested. At the time the current measure was developed and approved, using the new benchmark of 500 adjusted case filings resulted in judgeship numbers that closely approximated the judgeship needs of the majority of the courts of appeals, as the judges of each court perceived them. The current courts of appeals case-related workload measure principally reflects a policy decision using historical data on filings and terminations. It is not based on empirical data regarding the judge time that different types of cases may require. On the basis of the documentation we reviewed, we determined that there is no empirical basis for assessing the potential accuracy of adjusted filings as a measure of case-related judge workload.

In our report, we recommended that the Judicial Conference of the United States (1) update the district court case weights using a methodology that supports an objective, statistically reliable means of calculating the accuracy of the resulting weights and (2) develop a methodology for measuring the case-related workload of courts of appeals judges that supports an objective, statistically reliable means of calculating the accuracy of the resulting workload measure(s) and that addresses the special case characteristics of the Court of Appeals for the D.C. Circuit.

In a May 27, 2003, letter to GAO, the Chair of the Committee on Judicial Resources said that the development of the new case weights will use substantial data already collected and that our report neither reflected the sophisticated methodology the FJC had designed for the study nor acknowledged the substantial increased costs and time involved in a time study that was likely to offer little or no added value for the investment. The letter also noted that the workloads of the courts of appeals entail important factors that have defied measurement, including the significant differences in the courts’ case processing techniques. The Deputy Director of FJC, in a May 27, 2003, letter, agreed that the estimated data on noncourtroom judge time in the new study would not permit the calculation of standard errors. However, he stated that the integrity of the resulting case-weight system could still be evaluated on the basis of adherence to the procedures that will be used to gather the data and promote their reliability. We believe that our analysis and recommendations are sound and that the importance and costs of creating new Article III federal judgeships require the best possible case-related workload data to support the assessment of the need for more judgeships.

That concludes my statement, Mr. Chairman, and I would be pleased to answer any questions you or other Members of the Subcommittee may have. For further information regarding this testimony, please contact William O. Jenkins, Jr., at (202) 512-8777. Individuals making key contributions to this testimony included David Alexander, Kriti Bhandari, R. Rochelle Burns, and Chris Moriarity.

Whether the district court case weights are a reasonably accurate measure of district judge case-related workload depends on two variables: (1) the accuracy of the case weights themselves and (2) the accuracy of classifying cases filed in district courts by the case type used for the case weights. If case filings are inaccurately identified by case type, then the weights are inaccurately calculated. Because fewer categories are used in the courts of appeals workload measure, there is greater margin for error. The database for the courts of appeals should accurately identify (1) pro se cases, (2) reinstated cases, and (3) all cases not in the first two categories.

All current records related to civil and criminal filings that are reported to the Administrative Office of the U.S. Courts (AOUSC) and used for the district court case weights are generated by the automated case management systems in the district courts. Filings records are generated monthly and transmitted to AOUSC for inclusion in its national database. On a quarterly basis, AOUSC summarizes and compiles the records into published tables, and for given periods these tables serve as the basis for the weighted caseload determinations. In response to written questions, AOUSC described numerous steps taken to ensure the accuracy and completeness of the filings data, including the following: built-in, automated quality control edits are performed when data are entered electronically at the court level. These edits are intended to ensure that obvious errors are not entered into a local court’s database; examples of the fields screened include the district office in which the case was filed, the U.S. Code title and section of the filing, and the judge code. Most district courts also have staff responsible for data quality control.
A second set of automated quality control edits is used by AOUSC when transferring data from the court level to its national database. These edits screen for missing or invalid codes that are not screened for at the court level, such as dates of case events, the type of proceeding, and the type of case. Records that fail one or more checks are not added to the national database and are returned electronically to the originating court for correction and resubmission. Monthly listings of all records added to the national database are sent electronically to the involved courts for verification.
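The minimal Python sketch below illustrates the kind of edit-screening and return-for-correction flow described above. The field names, valid-code sets, and record format are hypothetical assumptions made for illustration; they are not AOUSC’s actual schema or edit rules.

```python
# Hypothetical sketch of automated quality control edits on filings records.
# Field names and valid-code sets below are invented for illustration.

from datetime import date

VALID_CASE_TYPES = {"civil", "criminal"}
VALID_PROCEEDINGS = {"original", "removed", "remanded", "reinstated"}

def screen_record(record):
    """Return a list of edit failures; an empty list means the record passes."""
    failures = []
    if record.get("case_type") not in VALID_CASE_TYPES:
        failures.append("missing or invalid type of case")
    if record.get("proceeding") not in VALID_PROCEEDINGS:
        failures.append("missing or invalid type of proceeding")
    filed = record.get("filing_date")
    if not isinstance(filed, date) or filed > date.today():
        failures.append("missing or implausible date of case event")
    return failures

def transfer_to_national_database(records):
    """Add passing records; return failing ones to the originating court."""
    accepted, returned_for_correction = [], []
    for rec in records:
        failures = screen_record(rec)
        if failures:
            returned_for_correction.append((rec, failures))
        else:
            accepted.append(rec)
    return accepted, returned_for_correction
```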
GAO appeared before the Subcommittee on Courts, the Internet, and Intellectual Property, House Committee on the Judiciary, to discuss the results of our review and assessment of case-related workload measures for district court and courts of appeals judges.

Biennially, the Judicial Conference of the United States, the federal judiciary's principal policymaking body, assesses the judiciary's needs for additional judgeships. If the Conference determines that additional judgeships are needed, it transmits a request to Congress identifying the number, type (courts of appeals, district, or bankruptcy), and location of the judgeships it is requesting. In assessing the need for additional district and appellate court judgeships, the Judicial Conference considers a variety of information, including responses to its biennial survey of individual courts, temporary increases or decreases in case filings, and other factors specific to an individual court. However, the Conference's analysis begins with the quantitative case-related workload measures it has adopted for the district courts and courts of appeals--weighted case filings and adjusted case filings, respectively. These two measures recognize, to different degrees, that the time demands on judges are largely a function of both the number and complexity of the cases on their dockets. Some types of cases may demand relatively little time and others may require many hours of work.

GAO found that the district court weighted case filings, as approved in 1993, appear to be a reasonably accurate measure of the average time demands that a specific number and mix of cases filed in a district court could be expected to place on the district judges in that district. The methodology used to develop the case weights was based on a valid sampling procedure, developed weights based on actual case-related time recorded by judges from case filing to disposition, and included a measure (standard errors) of the statistical confidence in the final weight for each weighted case type. The case weights, however, are about 10 years old, and the data on which the weights are based are as much as 15 years old. Changes since 1993, such as the characteristics of cases filed in federal district courts and changes in case management practices, may have affected whether the 1993 case weights continue to be a reasonably accurate measure of the average time burden on district court judges resulting from a specific volume and mix of cases.

The Judicial Conference's Subcommittee on Judicial Statistics has approved a research design for updating the current case weights, and we have some concerns about that design. The design would include limited data on the time judges actually spend on specific types of cases. The proposed design would not include collecting actual data on the non-courtroom time that judges spend on different types of cases. Estimates of the non-courtroom time required for specific types of cases would be based on estimates derived from the structured, guided discussions of about 100 experienced judges meeting in 12 separate groups (one for each geographic circuit). The accuracy of case weights developed on such consensus data cannot be assessed using standard statistical methods, such as the calculation of standard errors. Thus, it would not be possible to objectively, statistically assess how accurate the new case weights are.
Adjusted case filings, the principal quantitative measure used to assess the case-related workload of courts of appeals judges, are based on available data from standard statistical reports from the courts of appeals. The measure is not based on any empirical data about the judge time required by different types of cases in the courts of appeals. The measure essentially assumes that all cases filed in the courts of appeals, with the exception of pro se cases--those in which one or both parties are not represented by an attorney--require the same amount of judge time. On the basis of the documentation we reviewed, there is no empirical basis on which to assess the accuracy of adjusted filings as a measure of case-related workload for courts of appeals judges.

Whether the district court case weights are a reasonably accurate measure of district judge case-related workload is dependent upon two variables: (1) the accuracy of the case weights themselves and (2) the accuracy of classifying cases filed in district courts by the case type used for the case weights. If case filings are inaccurately identified by case type, then the weights are inaccurately calculated.
Conserving the nation’s natural and cultural resources and ensuring visitor enjoyment of these resources has been the primary mission of the National Park Service since its inception in 1916. The Park Service has long provided facilities for visitor use, but over time, the way that the Park Service has provided services has changed. In the 1920s and 1930s, the Park Service—building on the legacy of the railroad companies, which had built the great lodges in western natural parks such as Yellowstone in Wyoming, Glacier in Montana, and the Grand Canyon in Arizona—built basic infrastructure such as roads, wayside stops, administrative offices, campgrounds, and other basic visitor facilities, which were located in different buildings typically arranged as a village. From the 1950s through the 1970s, the Park Service centralized visitor services and adopted modern architecture with large, open spaces that allowed the increasing numbers of visitors to circulate more easily. The Park Service built many visitor centers in preparation for its 50th anniversary in 1966 and built another set of visitor centers in preparation for 1976, the nation’s bicentennial year. The centers built during this time are referred to either as Mission 66 buildings or Bicentennial buildings. Figures 1 and 2 show examples of each.

The building commonly thought of as a “visitor center” was created by the Park Service in the mid-1950s. Through a program called Mission 66, the Park Service invested over $600 million in park infrastructure in an effort to handle increasing numbers of visitors. In addition to roads, bridges, and offices, the program resulted in the construction of 111 visitor centers. These visitor centers, for the first time, grouped park interpretive presentations, auditoriums, administrative offices, restrooms, and various other services into a single building. According to the Park Service, the visitor center quickly became one of the most important facilities for helping the public see and enjoy a park, and it continues today to be the center of park planning and building.

In fiscal year 2001, the Park Service received about $160 million for its construction program to renovate and build new facilities, including visitor centers. Other types of facilities included in the construction program are maintenance buildings, warehouses, utilities, and seawalls and other retaining walls.

For a major project, such as a visitor center, a park generally identifies the project scope, or needs, and a cost estimate about 5 to 6 years before construction begins. If the project is to receive appropriated funds, the project is ranked, along with other projects, by a service-wide assessment team and is placed on a 5-year construction program list, which serves as the basis for the Park Service’s annual budget proposals that are reviewed by the Congress. If the project is not to be funded through the annual appropriations process, it receives funds according to the program under which it is being built. For example, projects built with fee demonstration funds will receive funds from regional fee demonstration accounts. Design (including pre-design activities) for all construction projects generally begins 3 years prior to construction and includes the development of increasingly detailed designs and increasingly specific cost estimates for the project.
The process includes analysis of different alternatives for the project and of the “life-cycle” cost of the alternatives, or the costs of each alternative over its useful life. The Park Service generally contracts with an architecture and engineering firm to complete construction documents for a project, and when these documents are complete, the Park Service contracts for construction with qualified private construction companies. During construction, the Park Service typically contracts with a firm to inspect the construction site and the construction progress.

For the 10-year period from fiscal years 1996 through 2005, the Park Service estimates that it has 80 projects that involve construction, renovation, or remodeling of visitor centers. Of these 80 projects, 16 have already been completed, 15 are under construction, and 49 are being planned. The projects under construction and planned may be delayed or cancelled because of funding and scheduling uncertainties. Park officials gave several reasons for the 80 visitor center projects, including the need to replace obsolete or deficient facilities or exhibits, increase space, and address increasing visitation. Of the 80 projects, 43 involve the construction of a new visitor center building, while 37 others require the renovation of an existing building. The Park Service identified 53 priority construction projects, and the Congress identified an additional 27 projects as priority projects.

The Park Service has completed or started over one-third of the 80 visitor center projects; the remaining two-thirds are being planned, with construction expected to be completed in the next 4 years. Figure 3 shows the status of the 80 visitor center projects. Of the 80 visitor center projects, 16 have been completed and 15 are still under construction. The remaining 49 visitor center projects, which were being planned as of April 2001, are expected to be completed by fiscal year 2005. Park projects that are being planned are in various stages of planning, ranging from those that are being conceptualized to those for which construction documents are being developed. For example, the concept for the visitor center project at Denali National Park in Alaska has been selected, and the project is in the process of being designed. On the other hand, Badlands National Park in South Dakota has construction documents for its visitor center project and is awaiting a construction contract.

Of the 49 projects being planned, some are further along in the planning process than others and thus have more precise cost estimates. The Park Service develops project designs and cost estimates at three points in the planning process. Twenty-eight of the 49 planned projects have a class C estimate, which is the least exact design and cost estimate produced; it is based on the costs of similar buildings already constructed and is produced by the park when a project is first considered and requested. Thirteen of the 49 projects have a class B estimate, which is developed after a period of conceptual planning and development of a more detailed plan of the building. The remaining eight planned projects have a class A estimate, which is the final, most precise planning cost estimate and is developed from construction documents.

Parks identified several reasons why a visitor center project was needed.
The major reasons given by park officials for building visitor center projects were to replace obsolete facilities or exhibits, to increase space, to handle increased visitation, to build a park’s first visitor center, or to replace a visitor center that was not at an accessible location.

One major reason that parks said they needed a new or renovated visitor center is that their existing facilities were obsolete, their exhibits were outdated, or both. According to Park Service staff, visitor centers and other facilities are expected to last 40 to 50 years without major renovation or replacement. Before that time, however, certain functions in the building, such as restrooms, may need to be updated, and as the building ages, maintenance and operation costs can increase. The Park Service renovates buildings to prolong their lifespan, but at some point it analyzes whether to remodel and continue using the same building or to build a new one. Several parks, including Bryce Canyon in Utah, Cape Cod National Seashore in Massachusetts, Zion National Park in Utah, and Grand Canyon National Park, have buildings that have aged and need either extensive renovation or replacement. While buildings may last decades, park exhibits contain information that can become outdated, such as scientific information about natural or cultural resources, or contain items that need special protection, such as artifacts or historical documents. With increasing knowledge and technology, park exhibits can be improved to enhance the visitor experience. For example, both Manassas National Battlefield Park in Virginia and Kennesaw Mountain National Battlefield Park in Georgia have renovated their visitor centers in part to upgrade their exhibits. Each park’s Civil War era artifacts are now housed in temperature-controlled cases with controlled lighting, both of which required upgraded utilities and connections. Figure 4 shows the addition to the Kennesaw Mountain visitor center.

The parks also identified the need for increased space as another major reason for requesting a new visitor center. Park officials stated that the size of a park’s staff and the number of visitors have increased since many of the visitor centers were built, requiring additional space to accommodate increased numbers of people. In addition, park officials identified the need to increase the space used to store collections or provide exhibits. Existing visitor centers ranged in size from 181 square feet to more than 79,000 square feet. The visitor center at Pinnacles National Monument in California—the one with 181 square feet—shares space with another facility and has no room for exhibits; the new planned visitor center will be 1,500 square feet. In contrast, the current visitor center at Gettysburg has an area of 79,274 square feet, including a building that houses the famous “Cyclorama” painting (a circular painting). According to the park’s superintendent, the current visitor center has no room to house the park’s collection of Civil War items, nor the space to store them under appropriate climatic conditions. The new Gettysburg visitor center being planned will be 118,100 square feet.

A third major reason that parks gave for needing a new or renovated visitor center is increased visitation. For many parks, visitation has increased greatly since the visitor center first opened.
Park officials project that visitation to their visitor centers will continue to increase for a variety of reasons, including the fact that the visitor center will be new, that the park is well located, or that the long-term trend in visitation has been increasing. Of 53 parks that provided complete data, 51 expected visitation at their visitor centers to increase an average of about 25 percent by 2005, with three-quarters of the parks reporting an increase of about 10 to almost 100 percent. For example, Everglades National Park in Florida expects visitation at its new center to increase from 194,000 visitors in 2000 to 243,000 in 2005.

Finally, several parks requested a new visitor center because they either had no visitor center or the existing center was determined to have a negative effect on natural or cultural resources or was situated in a location that was not accessible to visitors. For example, Grand Portage National Monument in Minnesota—which was created in 1951—has never had a visitor center and instead has offered visitor services out of its administrative building. On the other hand, the visitor center for Palo Alto Battlefield National Historic Site in Texas is currently located in leased facilities eight miles from the park. According to the park’s superintendent, the center is difficult to find and—because of the hours of the building in which it leases space—is closed on weekends, the time when most visitors come to the park. The new visitor center, which will be located near the park entrance, will be more accessible and convenient for park visitors.

Of the 80 visitor center projects to be completed by fiscal year 2005, 43 (54 percent) involve construction of a new building and 37 (46 percent) require the renovation of an existing visitor center or building. Individual parks reach the decision to construct a new building or to renovate an existing building during the initial development of the scope of the visitor center project. As park officials plan a visitor center project, they analyze the value of each alternative—a process called a value analysis—before deciding whether to renovate an existing visitor center or other building or to construct a new building. Park officials consider factors such as the existing building’s age and condition, visitation, maintenance costs over the life of the alternative buildings, and historic significance. The parks also consider whether the visitor center needs to be moved away from a flood plain or from the key natural or historic features of the park to prevent damage. For example, the project at Ulysses S. Grant National Historic Site in Missouri will build a new permanent visitor center to replace the temporary facilities located in a historic barn in a flood plain. On the other hand, Bryce Canyon decided to renovate its existing visitor center building because there was no other location in the park where a visitor center could be built without further endangering its protected prairie dog population—a valued resource. Figure 5 shows the renovation of the Bryce Canyon National Park visitor center in December 2000, as it was under construction.

In special cases, when a building has historic significance, the Park Service—because of its conservation mission and mandate not to impair park resources—must consider not only whether the building should be kept and maintained, but also how to rehabilitate and restore it.
For example, the visitor center at Dinosaur National Monument in Utah and one of the three visitor centers at Rocky Mountain National Park in Colorado have both been designated National Historic Landmarks because of their architectural significance and association with the Mission 66 period. These visitor centers will be renovated to restore and maintain the buildings’ original conditions, as well as to improve their usefulness as visitor centers.

In addition to projects that the Park Service identifies, the Congress can also identify—through legislation or through the appropriations process—projects for construction. Of the 80 visitor center projects, the Park Service requested 53 projects, or 66 percent, while the Congress concurred with these projects and requested an additional 27 projects, or 34 percent. In its annual budget request, the Park Service provides the Congress with a list of proposed construction projects for the upcoming fiscal year. As part of its review of the budget, the Congress may make revisions or additions to this list on the basis of its priorities. Congressional committees, and in some cases individual members, identify projects for construction that are not listed in the annual budget request. In some cases, projects identified by the Congress are on the Park Service’s 5-year list of projects to build, but they may not have been included in a particular fiscal year budget request. Park Service officials said that they work with congressional committees and members when the projects are added to the budget to get them ready for planning and construction. For example, in 1996, the Congress passed legislation authorizing the construction of a visitor center to interpret the battle of Corinth, Mississippi, and other regional Civil War actions; since that time the Park Service has been planning the facility.

The National Park Service estimates that a total of $542 million will be needed for the 80 visitor center projects. The cost of the individual visitor center projects varies widely, ranging from $500,000 to $39 million. In general, a new building with an increased number of functions and additional square footage costs more than a renovated building with fewer functions and less area. For example, the visitor center project at Great Smoky Mountains National Park cost $500,000 and involved the renovation of the existing visitor center building and the addition of an auditorium, which increased the total size of the building by 3,500 square feet to a total of 13,000 square feet. In contrast, the new 118,100-square-foot visitor center building planned for Gettysburg National Military Park will contain the five basic functions and many others for an estimated cost of $39 million. The additional functions being planned for this private-park partnership project include a museum, an area for the historic cyclorama painting, restoration of the painting, the removal of the existing visitor center, and rehabilitation of the land where the existing visitor center stands. The number and type of functions and the size of the buildings vary widely because the functions and size of visitor center projects depend on the needs of the individual parks and because the Park Service has no guidelines for what each visitor center project should include. Recognizing the need for such guidelines, the Park Service has contracted with two architecture and engineering firms to develop functions and square footage guidelines for key facilities, including visitor centers.
The Park Service plans to use these guidelines in its development and review of visitor center projects.

As of April 2001, the average cost to build a visitor center project was $6.7 million, with costs ranging from $500,000 to $39 million. Table 1 shows the range of costs of the 80 visitor center projects, the number and percentage of visitor center projects by cost range, and the share of total costs represented by each cost range. When complete, 28 visitor center projects, or about 35 percent of the total, will likely cost less than $3 million each. For example, the visitor center at Big Thicket National Preserve in Texas, which is estimated to cost $1.4 million to build, includes the five basic functions and offers an auditorium, a ticket and permit area, and a parking lot. Combined, these 28 projects are expected to cost an estimated $53 million, or about 10 percent of the estimated costs for all 80 projects. On the other hand, 15 visitor center projects, which represent about 19 percent of the total, are estimated to cost $266 million, or 49 percent of the estimated costs for all 80 visitor center projects. Each of these projects is estimated to cost more than $10 million. They include projects such as the Home of Franklin D. Roosevelt National Historic Site in New York, which will rehabilitate part of the library and build a conference center and a Park Service visitor center in cooperation with the National Archives for an estimated cost of $18 million, and Brown v. Board of Education National Historic Site in Kansas, which will build a new visitor center for an estimated $11.5 million. Other planned projects are estimated to cost more than $20 million each, including those at Gettysburg and Independence. Some projects that have already been completed, or are almost completed, for more than $10 million include those at the Grand Canyon, Zion, and Fort Sumter National Monument in South Carolina. Appendix III lists the total project costs for each project with a visitor center.

Visitor center project costs vary depending on whether the projects require new construction or renovation of existing visitor centers, the number and type of functions included in the visitor center building, and the size of the building. Almost half of the 80 visitor center projects involve renovation, while the remainder involve the construction of new visitor center buildings, which are generally more expensive. Table 2 compares the average costs of renovation and new construction and the cost ranges for each. On average, projects that involve new construction cost twice as much as projects that involve renovation. According to Park Service officials, construction of a new building involves more work, including preparing the building site and foundation and hooking up utilities. Renovations may not involve as much work and are generally less expensive. Some renovations can be costly, however, particularly if they involve historical rehabilitation of a building or if they involve a large building with multiple functions. Of the 80 projects, at least 6 involve rehabilitation of historic buildings or adaptation of buildings for use as visitor centers. For example, the visitor center at Dinosaur National Monument has been designated a National Historic Landmark for its architectural significance and association with the Mission 66 period.
The project, which will cost an estimated $7.7 million, will correct foundation weaknesses to protect the visitor center from collapsing and will create a larger area inside when the museum collections are moved to a new curatorial building. Another project, which involves restoration of the Kelso Depot at Mojave National Preserve in California, will cost $6 million to preserve one of two remaining train stations built in the 1920s for use as a visitor center. The cost of a visitor center project also varies according to the number and type of functions each includes. The number and types of functions a visitor center project has depend on the individual needs of a park and can include parking lots, transportation facilities, landscaping, headquarters space, maintenance space, and rehabilitation of areas where existing visitor centers are demolished. With few exceptions, the 80 visitor center projects include the five basic functions of a visitor center—information, exhibits, publication sales, restrooms, and administrative space for center personnel. In addition, several parks identified a number of additional functions, such as auditoriums, curatorial areas, and transportation facilities, to be included in visitor center projects that had a direct bearing on the cost of the projects. Table 3 shows the average number of functions for the 80 visitor center projects by cost range. The five basic functions are not included, as nearly all visitor center projects contain them. The 14 visitor center projects with cost projections below $2 million have an average of 2 additional functions beyond the 5 basic functions, whereas the 15 visitor center projects with cost projections above $10 million average 6 additional functions, or triple the number included in the projects costing less than $2 million. The type of function included in the project also affects a project's costs. Several parks have included transportation facilities in their projects, which can be costly. For example, the Grand Canyon and Zion national parks each have a form of bus service with shuttle stops, buses, and related maintenance buildings. At Zion National Park, the new visitor center project cost about $24 million and includes the construction of the visitor center, a bus maintenance center, shuttle stops, and the purchase of over 30 buses for the park's new shuttle system. Figure 6 shows several different parts of the new visitor center project, including a large outdoor exhibit area that can accommodate large numbers of visitors during peak season. Fort Sumter National Monument, which is located on an island, required the construction of a unique transportation system: a dock from which visitors will travel by boat to the site. The visitor center is currently being built on this dock. Figure 7 shows the frame of the visitor center in November 2000, as well as the dock, all of which are expected to be completed in August 2001. Depending on a park's needs, parks have also added other functions, including headquarters space; space for the concessioners operating services in the parks, such as hotels, guided tours, gift shops, or restaurants; curatorial space; and museum space. For example, the Gettysburg project will house its Civil War collection in a new visitor center museum. Appendix II presents detailed information on the 80 visitor center projects and the functions included in them.
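To make the cost-band comparison in table 3 concrete, here is a minimal sketch, in Python, of the underlying grouping: averaging the count of additional functions within project cost bands. The per-project records below are invented placeholders loosely patterned on the examples in the text, not the survey data behind this report.

```python
from statistics import mean

# Invented placeholder records: (total project cost in $ millions,
# number of functions beyond the five basic ones). These are not the
# survey data underlying the report.
projects = [
    (1.4, 2),
    (1.8, 1),
    (8.5, 4),
    (24.0, 7),
    (39.0, 6),
]

# Cost bands mirroring the comparison in the text: under $2 million,
# $2 million to $10 million, and over $10 million.
bands = [(0.0, 2.0), (2.0, 10.0), (10.0, float("inf"))]

for low, high in bands:
    in_band = [extra for cost, extra in projects if low <= cost < high]
    if not in_band:
        continue
    label = (f"over ${low:g} million" if high == float("inf")
             else f"${low:g} million to ${high:g} million")
    print(f"{label}: {len(in_band)} projects, "
          f"average {mean(in_band):.1f} additional functions")
```

Run against the actual survey records, the same grouping would reproduce the averages reported in table 3.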
Finally, the size of the visitor center, measured by the square footage of the visitor center building, influences the total cost of the visitor center project. Table 4 shows the average square footage of the visitor center buildings by the cost ranges of the projects. On average, the visitor center projects in the higher cost ranges have much larger buildings. The 15 most costly projects have buildings with an average area of 28,228 square feet, while the 14 least costly projects average 6,747 square feet. The variation in visitor center project functions and size is partially due to the fact that the Park Service has not developed specific guidelines for what should be included in a visitor center project. Under the current Park Service policy on park facilities, visitor center projects may be constructed when necessary to provide visitor information and interpretive services. The policy generally describes what may be included in a visitor center, such as information services, sale of educational materials, museums, museum collections storage, exhibits, and other programs and spaces to create a quality visitor experience. The determination of the functions and size for a particular visitor center project is made initially by the park superintendent and is then reviewed and analyzed by the appropriate regional office and the construction program. Since 1996, the Park Service has also relied on an advisory board, called the Development Advisory Board, to review all construction projects over $500,000. Of the 80 projects, the Board has reviewed 37 and has yet to review 30; the remaining 13 projects predated the review process. The board reviews proposed project plans and cost estimates, hears presentations from park employees, and either forwards the project for the Director's approval or requests additional analyses. Projects that require additional analyses are sent back to the parks for revisions and additional work before returning to the board for review. To provide specific guidelines for the Development Advisory Board and the parks, the Park Service contracted with two architecture and engineering firms to develop construction planning criteria and preliminary cost guidance for Park Service facilities, including functions, square footage, and cost. One of the contractors is expected to provide guidelines for maintenance facilities to the Park Service in August 2001 and will continue working on guidelines for the other facilities, including visitor centers, in the upcoming year. When the guidelines are complete, the Park Service plans to have park staff use them to develop the scope of projects and the initial cost estimates, and plans to provide the guidelines to the Development Advisory Board for use in its future reviews of projects. The Park Service receives appropriations for planning, construction, and repair and rehabilitation, all of which can be used in the construction or renovation of visitor center projects. In addition, the Park Service has successfully generated supplemental funding from other sources, such as private partnerships, fee demonstration funds, federal highway funds, and various other government entities. Figure 8 shows, by source, the total funding that has been or will be provided for the 80 visitor center projects. Park Service funds represent the largest funding source for the 80 visitor center projects, contributing an estimated $322 million of the total estimated cost of $542 million.
Private partnerships are the second largest source of funding for the visitor center projects, providing an estimated $97 million. The Park Service can receive donations—including buildings—from private individuals or groups. Many parks have "Friends" groups or natural history associations that are interested in supporting the park by raising funds and developing important projects. After private partnerships, the third largest source of funds for visitor center projects is estimated to be fee demonstration funds, which are raised through additional or new fees charged by individual parks. For example, a park can adjust its entrance fees based on use or charge additional fees during peak seasons. Of the funds collected, the park can keep 80 percent, and the remaining 20 percent is put into a pool for which other parks can compete. Some parks received authority to raise fee demonstration funds in fiscal year 1996 and can spend these funds through 2005. The Park Service estimates that $48 million, or 9 percent, of the total funding for the 80 visitor center projects will be fee demonstration funds. Additional funding for visitor center projects comes from a number of different sources. Road construction funds from the Federal Highway Administration's Federal Lands Highway Program provide an estimated $35 million, or 6 percent, of the total project funding. The highway program provides discretionary funding that can be used for, among other things, visitor center projects located on major roads. For example, funding for the visitor center project at Big Cypress National Preserve in Florida, which will cost $2.1 million, was provided from highway funds. Finally, funding for visitor center projects also comes from other federal agencies, state governments, concession owners, and Indian tribes. In total, these other funding sources provide an estimated $40 million, or 7 percent, of funding for the 80 projects. For example, the largest single source of funding for the Home of Franklin D. Roosevelt project—$8.2 million—will come from the National Archives for the library portion of the project. Alternative sources of funding—such as private partnership funds, fee demonstration funds, or highway funds—can significantly benefit some projects, allowing them to be constructed perhaps several years before they would otherwise have received Park Service construction appropriations. Some projects receive small amounts of these alternative sources of funding, while other projects receive almost their entire funding from alternative sources. For example, Kennesaw Mountain National Battlefield Park received $520,000 for its renovation from its Friends group and the Kennesaw Mountain Historical Association, which represented about 25 percent of its total costs. On the other hand, the new visitor center project at the Grand Canyon used over $16 million, or 68 percent of its total construction costs, in fee demonstration funds raised by the park. The Park Service is experiencing increased activity in building projects that include visitor centers and faces the challenge of ensuring that these buildings serve the purposes of the individual parks and are built efficiently and cost-effectively. The National Park Service has made efforts—through the establishment of the Development Advisory Board and the development of facility guidelines—to move the agency toward achieving these goals. The wide variation in the costs, size, and functions of projects that include visitor centers underscores the need for these efforts.
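To illustrate the funding arithmetic described above, the following is a minimal sketch in Python. The source totals are the report's estimates for the 80 projects; the fee collection figure in the second part is a hypothetical example of the 80/20 retention rule, not an actual park's receipts.

```python
# Estimated funding for the 80 visitor center projects, in $ millions,
# as reported above and in figure 8.
funding = {
    "Park Service appropriations": 322,
    "Private partnerships": 97,
    "Fee demonstration funds": 48,
    "Federal highway funds": 35,
    "Other sources": 40,
}

total = sum(funding.values())  # $542 million
for source, amount in funding.items():
    print(f"{source}: ${amount} million ({amount / total:.0%} of total)")

# The fee demonstration retention rule: a park keeps 80 percent of the
# fees it collects; 20 percent goes to a pool other parks compete for.
collected = 2_000_000  # hypothetical annual fee receipts, in dollars
print(f"Park keeps ${0.80 * collected:,.0f}; pool gets ${0.20 * collected:,.0f}")
```

The computed shares match the percentages cited in the text (for example, fee demonstration funds come to 9 percent and highway funds to 6 percent of the $542 million total).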
We provided the Department of the Interior with a copy of our draft report for review and comment. Overall, Interior said that the report provides useful information that will be beneficial to the Park Service in planning, programming, design, and construction of visitor centers and associated facilities. Interior said the report presents information in a "non-interpreted" way, but asserts that some of our data are incorrect and that some relevant information has not been included in the report. First, Interior believes that some of the data gathered with our survey and used in portions of the report are incorrect. We disagree. Our objectives were to provide information on the cost, functions, and funding for visitor center projects. Because Interior does not maintain a database with this information, it was necessary for us to first identify visitor center projects and then to gather specific information using a questionnaire to answer the study's specific objectives. As we pointed out in our scope and methodology, we designed our questionnaire with the Park Service's input and we discussed the questionnaire in detail with officials from 11 parks. To address potential inconsistencies or misinterpretations in responses from the parks, we followed up, as is our normal practice, with all parks that provided data that appeared to be inconsistent or misinterpreted. As a further check on the validity of the data, we corroborated the project cost and funding data with regional budget staff. We believe the data upon which the report is based are accurate. The other data that we gathered as part of the questionnaire and to which Interior refers—data on visitor center building costs—were not used in the report. We attempted to gather these data because Interior did not maintain them. However, in discussing visitor center building costs with the parks and with Interior construction staff, we found that the data were subject to different interpretations and assumptions about what specific costs should be included. For example, parks differed in their interpretations of whether to include site development costs, which, as Interior points out in its comments, can be a major component of overall visitor center project costs. Given that collecting specific data on visitor center building costs was not part of our overall objectives, and that the data are subject to different interpretations and assumptions, the data need to be clarified and studied in more detail as part of a separate review. Interior also believes that providing costs per square foot of the individual visitor center buildings is more meaningful than providing the overall costs of visitor center projects. We strongly disagree that information on visitor center building costs is more meaningful than the cost of the projects. As stated above, our purpose was to discuss the cost, functions, and funding sources for visitor center projects and not just visitor center buildings. The requesters asked that we gather data on overall visitor center projects because the total project costs reflect all costs related to developing and constructing a visitor center and thus represent the cost to the taxpayer. Also, only in this way can the full range of visitor center project functions, including transportation facilities, be addressed. Although Interior states that cost-per-square-foot data are more meaningful than project costs, the Park Service has not developed a database containing this information.
Furthermore, Interior asserts that the data could have been easily developed from data already accumulated. We disagree. We gathered, as part of our study, data that could be used to calculate the cost per square foot of individual visitor centers. However, because of the various interpretations and assumptions used in calculating the square-foot costs of visitor center buildings, we ultimately decided not to report these data. We agree that cost-per-square-foot data on visitor center buildings are important and question why the Park Service has not yet developed such data. Interior notes that trends in visitor center costs and costs per square foot can be identified and that our report could have identified trends but did not do so. We disagree that trends can be identified. The trends that Interior says it has identified are not trends, but comparisons of average costs at two points in time. We attempted to develop trends by plotting total and average project costs by the year projects were completed and, as we stated in the report, were unable to discern a trend in costs because of the wide variation in projects. Finally, Interior asserts that parts of our discussion of its planning, design, and construction processes are incomplete or incorrect. We believe that for the purpose of this study, general background information is needed to interpret the data and that we have provided complete information for this purpose. We did make technical changes, as appropriate, to address Interior's specific comments on incorrect information related to these processes. Interior's comments are presented in their entirety in appendix V. We conducted our review from November 2000 through June 2001 in accordance with generally accepted government auditing standards. We are sending copies of this report to the Honorable Gale A. Norton, Secretary of the Interior; the Director of the National Park Service; and other interested parties. This report will be available on GAO's home page at http://www.gao.gov. If you or your staff have any questions about this report, please call me at (202) 512-3841. Key contributors to this report were Fran Featherston, Cliff Fowler, Susan Iott, Chet Janik, and Bill Temmler. Our study included all National Park Service visitor center projects that had either been completed, were under construction, or were planned to be completed between fiscal year 1996 and fiscal year 2005 (as identified by January 2001). We selected fiscal year 1996 as a starting point because changes in the Park Service's accounting and regional organization prior to 1996 made earlier data difficult to obtain. We used fiscal year 2005 as our cutoff because projects that the Park Service plans to complete beyond that year are less certain than projects to be completed before then, as they have not yet been reviewed or prioritized by the agency. The Park Service's 5-year construction plan, which extended through fiscal year 2005 at the time we were gathering information, includes the agency's prioritized construction projects. First, to answer all three objectives—the number, reasons, costs, functions, and sources of funding for the identified projects—we developed a questionnaire. To gather background data and to develop and pretest the questionnaire for our study, we visited or talked to officials at 11 national parks in Georgia, Tennessee, South Carolina, Utah, Arizona, Colorado, Virginia, and Pennsylvania.
We chose parks that had visitor center projects in various stages of construction and that had a variety of functions. A copy of the questionnaire is included in appendix IV. Second, to determine the number of visitor centers built, renovated, and planned from fiscal year 1996, we worked with National Park Service officials to develop a current definition of a visitor center project. This was necessary because the Park Service did not have a specific definition of a visitor center project but rather had general guidance on what constitutes a visitor center. We agreed with the Park Service that a visitor center project (1) must have a staffed facility that provides general information on the park; (2) must include administrative space for visitor center personnel plus the four other basic functions included in the guidance—an information desk, exhibits, publication sales, and restrooms; and (3) can include a number of other functions, including an auditorium, ticket sales and permits, transportation facilities, and other specialized uses, depending on the needs of the individual park. In addition, we agreed that visitor contact stations that are not staffed by personnel, as well as specialized facilities such as education centers or beach houses, would not be counted as visitor center projects. Third, using this definition, we reviewed Park Service budget and planning documents and interviewed Park Service construction officials to identify an initial set of visitor center projects. We then sent to each Park Service regional office, for review, a list of projects at parks in the respective regions. Through this process we identified 106 visitor center projects that were either completed, under construction, or planned to be completed during the period of fiscal year 1996 through fiscal year 2005. Fourth, we mailed questionnaires—one for each of the 106 visitor center projects—to the 94 parks that had visitor center projects built or planned during our time frames. Some parks had more than one project built or planned. To corroborate that the visitor center projects met our specifications, we requested documentation for each project. One park identified a second project that fell within the study's time frames and completed a questionnaire for that project, bringing the total number of identified projects to 107. However, 27 of the 107 projects were subsequently dropped from the study because the parks stated that these projects did not fit into our universe for several reasons: the bulk of the visitor center project had been completed prior to 1996, the project had been redesigned and the visitor center portion eliminated, or the project would not be completed by 2005. This left 80 projects in the survey. We mailed the questionnaires on January 10, 2001, and obtained completed questionnaire responses for all 80 projects by March 16, 2001. Finally, to corroborate that we had received consistent funding and cost information for each project, we asked the budget staff from each of the Park Service's seven regions to ensure that the parks in the region had reported cost and funding data in the same way. Specifically, we asked the regions to ensure that the funds included contingency and supervision costs and that the cost and funding data were in constant fiscal year 2000 dollars. We received corrections for our data through April 2001.
Finally, we coordinated our work with the architecture and engineering contractor that the Park Service had hired to develop square footage and function standards for key park facilities, including visitor centers. We conducted our work from November 2000 through June 2001 in accordance with generally accepted government auditing standards. The following figure shows the functions, in addition to the five basic functions, that are included in the 80 visitor center projects that the Park Service has either completed, has under construction, or is planning to complete between fiscal years 1996 and 2005. The projects are grouped by whether they involve new construction or renovation of an existing building and by the status of the project construction. The following table provides details on the funding sources for the 80 visitor center projects that the Park Service has completed, has under construction, or is planning. The projects are grouped by new construction or renovation of an existing building and by the status of the project construction. We worked with the regional office budget staff to corroborate the funding data provided in the questionnaires and to ensure that funds were reported in constant fiscal year 2000 dollars. The following are GAO's comments on the Department of the Interior's letter dated July 10, 2001. 1. We disagree. Our objectives were to provide information on the cost, functions, and funding for visitor center projects. Because the Park Service does not maintain a database with this information, it was necessary for us first to identify visitor center projects and then to gather specific information using a questionnaire to answer the study's specific objectives. As we pointed out in our scope and methodology, we developed the questionnaire with input from the Park Service and discussed the questionnaire in detail with officials from 11 parks. To address potential inconsistencies or misinterpretations in responses from the parks, we followed up, as is our normal practice, with all parks that had provided data that appeared to be inconsistent or subject to misinterpretation. As a further check on the validity of the data, we corroborated the project cost and funding data with regional budget staff. Based on this, we believe that the data upon which the report is based are accurate. Visitor center building cost data, which we gathered as part of the questionnaire and to which Interior makes reference, were not used in the report. We gathered these data because the Park Service did not have them available. However, in discussing visitor center building cost data with the parks and with Park Service construction staff, we found that the data are subject to different interpretations and assumptions about what specific costs should be included. For example, parks differed in their interpretations of whether to include site development costs, which, as Interior points out in its comments, can be a major component of overall visitor center project costs. Given that the visitor center building cost information did not pertain to our overall objectives, and that the data are subject to different interpretations and assumptions, we decided that these data would need to be studied in more detail as part of a separate review. 2. We strongly disagree that information on visitor center building costs is more meaningful than the total cost of the projects.
As stated above, our objectives were to discuss the cost, functions, and funding sources for visitor center projects, not buildings. Our purpose was not to provide data to allow comparisons with other agencies' or organizations' facilities, as Interior asserts would be possible if cost-per-square-foot data were available. The requesters asked that we gather data on overall visitor center projects because the total project costs reflect all costs related to developing and constructing a visitor center and represent the cost to the taxpayer. In addition, in reviewing visitor center projects, the requesters are concerned that visitor center projects have an increasing number of functions. Although Interior states that the cost-per-square-foot data are more meaningful than project costs, it has not developed a database containing this information. Furthermore, Interior asserts that the data could have been easily developed from data already accumulated. We disagree because, as we pointed out above, the data are subject to interpretation and need to be clarified and studied in a separate review. 3. We believe that trends in visitor center project costs cannot be identified. Our attempt to develop trends by plotting total and average project costs by the year projects were completed left us unable to discern a trend because of the wide variation in projects. Our comments regarding the trends that the Park Service says it identified can be found in comment 18. 4. We disagree. We believe that for the purpose of this study, general background information is needed to interpret the data and that we have provided complete information for this purpose. We did make technical changes, as appropriate, to address Interior's specific comments on incorrect information related to these processes. 5. We recognize that the Park Service's planning criteria and preliminary cost guidance initiatives have a direct bearing on our report, and as such, our draft report to Interior included a discussion of these initiatives. 6. We disagree. Our objectives were to discuss overall project costs and functions. The Park Service told us that costs could not be broken out by functions, such as transportation facilities, and therefore we could not provide costs by individual function. We decided that the selective reporting of one type of cost, such as site development costs, was not warranted. 7. As part of our study, we attempted early on to include cost-per-square-foot data for visitor center buildings and were told that the Park Service does not maintain these data. We then attempted to collect data as part of our questionnaire that could be used to calculate the cost per square foot of individual visitor centers. However, because of the various interpretations and assumptions used in calculating square-foot costs, we did not use the data that we developed. We agree that cost-per-square-foot data are important information and question why the Park Service has not yet developed such data. 8. See comment 7. 9. We coordinated with the Park Service in the development of our questionnaire and incorporated its changes where appropriate. Further testing also resulted in modifications to the questionnaire that provided as much consistency and clarity as possible to the terms used in the questionnaire. 10. We disagree that the Park Service was not kept informed of the development of our questionnaire.
Based on our discussions with the Park Service, we were told that much of the data that we needed was available from the parks or the regions, as the park superintendents and regions are ultimately responsible for the completion and development of projects. As we point out in our scope and methodology description in appendix I, we discussed the questionnaire with officials at 11 parks, not a few parks as Interior indicates. We used our professional judgment and input from our professional survey design staff to make changes that were necessary to improve the questionnaire's clarity. We do not typically share the respondents' reactions while we are developing a questionnaire. 11. We agree that a full set of responses has never been shared with anyone in the Park Service. It is our policy not to share questionnaire responses and data with agencies until after we have completed our analysis and final report. In the questionnaire itself, we deliberately provided space for explanations of any unique circumstances and for any other information the respondents felt necessary to convey. As a matter of practice, we follow up on questionnaire responses when we determine that it is necessary to clarify data. It is not unusual for respondents to provide handwritten comments on a questionnaire, even when they understand the questions, because respondents may want to further explain their answers. 12. While the Park Service says this information is the most significant data on visitor centers, it has not developed a database with this information. The Park Service was able to calculate the data contained in its comments only after we identified the 80 visitor center projects. Until the Park Service develops such a database, it will be unable to compare and benchmark its costs against those of other agencies and organizations. As previously stated, our objectives were not to provide data for comparisons and benchmarks with projects of other agencies and organizations. 13. The data to which Interior is referring are not GAO's data, and we cannot comment on their validity or make assertions about them because they were not available to us during the 8-month period of our review. 14. We disagree with the assertion that the data obtained have problems because they were gathered through a questionnaire. The data to which Interior is referring are data on visitor center building costs. We gathered data on visitor center building costs through a questionnaire to individual parks because the Park Service does not maintain a database of these costs. We noted that the calculation of these costs depends on certain assumptions, such as how much site development cost to include and whether to include management and contingency costs. Because of the inherent difficulties and the need for these assumptions to be clarified, we ultimately decided not to report these data. 15. As previously mentioned, the Park Service developed the data in this section only after we had completed our audit work. In our discussions with the Park Service about the cost-per-square-foot data included in the comments, the Park Service made certain assumptions about what costs to include or not to include. For example, the costs related to management, contingencies, or site development were not included in the calculations. The inclusion or exclusion of these costs can have a major impact on the cost per square foot of the facilities.
As previously stated, because of these interpretations and assumptions, we believe that further study of these data is warranted. 16. We disagree. We have provided this perspective in other areas of the report, including a discussion of the park's decision to renovate or replace a visitor center building. We used the term "old" to describe general conditions that could lead to the construction of a new building, including a new building to replace an existing building. We added, in response to the comments, a footnote with this technical definition. 17. We reported on the projects for which new buildings were being built. We did not make specific reference to projects for which a visitor center was the first in the park or in an area within a park. The construction of a new building is significantly different from the renovation of an existing building and poses different challenges in the construction process. We added a footnote to the report and to the table in appendix III that identifies the projects that are replacing existing visitor centers as opposed to providing a new building for a park. 18. We disagree. We do not find the Park Service's comparison of the average cost of Mission 66 visitor centers with the average we estimated for the 80 visitor center projects in our report to be an acceptable trend analysis. An appropriate trend analysis would involve a time series—that is, data over a number of years—of comparable data. We do not believe that a comparison of two points, each an average of approximately 10 years of data, accurately demonstrates a trend. Also, we do not believe that data from Mission 66 visitor centers and our data on visitor center projects are comparable because our data consist of projects that include both construction of new buildings and renovation of existing buildings, while the Mission 66 visitor centers were all newly constructed. We attempted to develop trend information using the cost data for the projects for the 10-year period of this study, but as stated in the report, because of the variation in the projects, we were unable to discern a trend. Interior also asserts that we had a second and third opportunity to identify trends by comparing the size of visitor centers on a functional basis and their costs per square foot. As pointed out in the report, the Park Service has recently contracted for precisely this type of analysis, and we did not want to duplicate these efforts. 19. We disagree. As shown in the report, the cost of a project to renovate a building is on average $4,392,000, while the cost of a project to construct a new building is on average $8,826,000. We also show in appendix II of the report that projects with renovations generally do not have as many functions as projects with new buildings. We do point out in the report that renovations are not always less expensive than projects with new construction, and we have highlighted instances when a visitor center renovation may be more costly than the construction of a new building. 20. Interior has misinterpreted what we wrote. We do not state that projects added by the Congress are ready for construction. Our point is that when the Congress identifies a project for construction, the Park Service works with the Congress to get the project ready for planning and construction. To avoid confusion, we clarified this language. 21. We believe that our discussion of these two initiatives is sufficient for the purposes of this report.
Because we were not asked to review the process that the Park Service has in place to construct its facilities or the improvements that it is planning, we did not discuss these in detail. We do discuss the Park Service's policy on park facilities, the responsibilities of the Development Advisory Board, and the initiatives underway by the Park Service to develop construction planning criteria and preliminary cost guidance for facilities, including visitor centers. As we point out in our observations, we believe that the initiatives the Park Service is undertaking, if implemented efficiently, are a step in the right direction. 22. We disagree that this is an incorrect conclusion. The variation in visitor center projects occurs in part because many of the projects are still in the stages of initial development, and the Park Service relies on review of the projects after their development to correct scoping problems. The Department states that a lack of guidelines for parks does not result in inappropriately scoped projects because the Park Service has processes in place to ensure that the scope and size of visitor centers are appropriate. While it may be true that processes are in place to review visitor center projects and their scopes, without guidelines on the type and size of functions that can be included, projects can be overscoped or underscoped. If the Park Service had guidelines for what should be included in a visitor center project, there could be limits on the scope of the initial projects proposed by parks. 23. We agree and have changed the language of the report. 24. We agree and have added language to clarify that the parks identify a general project scope, meaning that they consider what functions they need and develop an estimate of their square footage needs. 25. We agree. We were referring to the predesign process and changed the text to reflect this. 26. We did not intend to say that the Park Service and the Congress identified completely separate groups of projects. We changed the language of the report to say that the Congress concurred with the Park Service's projects and added its own projects. 27. We noted that Park Service buildings are expected to have a long lifespan because the Park Service's policy is to renovate and reuse buildings before they are replaced. We agree that elements may need to be renovated and that maintenance costs may become more expensive as the buildings age. We clarified the text to indicate that before 40 or 50 years elapse, maintenance and operation costs could become expensive and elements of the building may need to be updated. 28. We agree and clarified this section of the report to more clearly reflect the different stages of the planning and design process and the time at which the park makes this decision.
Visitor centers at the national parks are among the most important facilities run by the National Park Service. As existing visitor centers age and new parks are created, renovated or new facilities are needed. This report discusses (1) the number, the status, and the reasons for Park Service visitor center projects; (2) whether the projects involve new construction or the renovation of existing buildings; (3) whether these projects were designated priorities by the Park Service or by the Congress; (4) the costs and functions of the projects; and (5) the funding sources for the projects. GAO found that the Park Service has completed, has under construction, or plans to complete 80 projects to renovate or build visitor centers between fiscal years 1996 and 2005. The renovations and new construction are intended to replace aging facilities and exhibits, to provide more space, and to handle rising numbers of visitors. Of the 80 projects, 53 were priorities of the Park Service and 27 were priorities of the Congress. The Park Service estimates that the total cost of the 80 projects will be $542 million. The visitor center projects are funded primarily by the Park Service's appropriated funds. Other funding sources include private partnerships and fee demonstration funds.
To ensure the trustworthiness, reliability, and character of personnel in positions with access to classified information, DOD relies on a multiphased personnel security clearance process. Figure 1 shows six phases that could be involved in determining whether to grant an actual or a potential job incumbent a clearance. The three phases shown in gray are those that are most transparent to individuals requesting an initial clearance, and they are the three phases that were the primary focus of the findings in this testimony. At the time of our September 2006 report, our independent analysis of timeliness data showed that industry personnel contracted to work for the federal government waited more than 1 year on average to receive top secret security clearances, and government statistics did not portray the full length of time it takes many applicants to obtain clearances. We found delays in all phases of the clearance process that we examined, and government statistics did not account for the full extent of the delays. Delays in the clearance process may cost money and pose threats to national security (see table 1). As table 1 shows, industry personnel granted eligibility for top secret clearances by DISCO from January to February 2006 waited an average of 446 days for their initial clearances or 545 days for their clearance updates. DOD may, however, have issued interim clearances to some of these industry personnel, which might have allowed them to begin work before they received their final clearances. IRTPA requires that, beginning in December 2006, 80 percent of clearances be completed in an average of 120 days. Delays were found in each phase of the clearance process that we examined: Application submission. The application-submission phase of the clearance process took an average of 111 days for the initial clearances that DISCO adjudicated in January and February 2006 (see table 1). The starting point for our measurement of this phase was the date when the application was submitted by the facility security officer. Our end point for this phase was the date that OPM scheduled the investigation into its Personnel Investigations Processing System. We used this starting date because the government can begin to incur an economic cost if an industry employee cannot begin work on a classified contract because of delays in obtaining a security clearance. We used this end date because OPM uses it as the start point for the next phase in the clearance process. The government plan for improving the clearance process noted that "investigation submission" (i.e., application submission) is to be completed within an average of 14 calendar days or less. Therefore, the 111 days taken for the application-submission phase was nearly 100 days more, on average, than allocated. Several factors contributed to the amount of time we observed in the application-submission phase, including the rejection of applications multiple times because of inaccurate information (as reported in an April 2006 DOD Office of Inspector General report); multiple completeness reviews—by the corporate facility security officer, DISCO adjudicators, and OPM staff; and the manual entry of data from paper applications when eQIP was not used. Investigation. Investigations for the initial top secret clearances of industry personnel adjudicated in January and February 2006 took an average of 286 days, compared to OMB's 180-day goal for that period (see table 1).
During the same period, investigations for top secret clearance updates, or "reinvestigations," took an average of 419 days, almost one and a half times as long as the initial investigations (no goal is given for clearance updates or reinvestigations). The mandated February 2007 OMB report to Congress noted that "Reinvestigation timeliness has not been addressed, because the improvement effort focused on individuals for whom initial security clearances are required to perform work." Our September 2006 report identified many factors that inhibit the speed with which OPM can deliver investigative reports to DISCO and other adjudication facilities. Those factors included backlogged cases that prevent the prompt start of work on new cases, the relative inexperience of the investigative workforce, slowness in developing the capability to investigate overseas leads, and difficulty obtaining access to data in governmental records. Adjudication. DISCO adjudicators took an average of 39 days to grant initial clearance eligibility to the industry personnel in our population (see table 1). The measurement of this phase for our analysis used the same start and stop dates that OPM uses in its reports, starting on the date that OPM closed the investigative report and continuing through the date that DISCO adjudicators decided clearance eligibility. IRTPA requires that at least 80 percent of the adjudications made from December 2006 through December 2009 be completed within an average of 30 days. As of June 2006, DISCO reported that it had adjudicated 82 percent of its initial top secret clearances within 30 days. Delays in any phase of the clearance process cost money and threaten national security. Delays in completing initial security clearances may have a negative economic impact on the costs of performing classified work within or for the U.S. government. For example, in a May 2006 congressional hearing, a representative of a technology association testified that retaining qualified personnel resulted in salary premiums as high as 25 percent for current clearance holders. Delays in completing clearance updates can have serious negative consequences that differ from those stemming from delays in completing initial clearance-eligibility determinations. In 1999, the Joint Security Commission reported that delays in initiating reinvestigations for clearance updates create risks to national security because the longer individuals hold clearances, the more likely they are to be working with critical information. The statistics that OMB and OPM have provided to Congress on the timeliness of the personnel security clearance process do not convey the full magnitude of the investigation-related delays facing the government. While our September 2006 report noted additional problems with the transparency of the timeliness statistics, I will review our concerns about five such issues: (1) limited information on reinvestigations for clearance updating, (2) not counting the total number of days to finish the application-submission phase, (3) shifting some investigation-related days to the adjudication phase or not counting them, (4) not counting the total number of days to complete closed pending cases, and (5) not counting the total number of days to complete investigations sent back for rework. Limited information on reinvestigations for clearance updating.
In its mandated February 2007 report to Congress, OMB acknowledged that "reinvestigation timeliness has not been addressed," but the findings from our population of industry personnel (obtained using DOD's, rather than OPM's, database to assess timeliness) indicated that clearance update reinvestigations took about one and a half times as long as the initial investigations. The absence of timeliness information on clearance update reinvestigations does not provide all stakeholders—Congress, agencies, contractors attempting to fulfill their contracts, and employees awaiting their clearances—with a complete picture of clearance delays. We have noted in the past that focusing on completing initial clearance investigations could negatively affect the completion of clearance update reinvestigations and thereby increase the risk of unauthorized disclosure of classified information. Not counting all days to finish the application-submission phase. OMB's February 2007 report noted that its statistics do not include "the time to hand-off applications to the investigative agency." The gray section of the application-submission phase in table 1 shows some of the activities that were not counted when we examined January and February 2006 clearance documentation for industry personnel. These activities could be included in timeliness measurements depending on the interpretation of what constitutes "receipt of the application for a security clearance by an authorized investigative agency"—IRTPA's start date for the investigation phase. Shifting some investigation-related days to the adjudication phase or not counting them. In our September 2006 report, we raised concerns about how the time to complete the adjudication phase was measured. The activities in the gray section of the adjudication phase in table 1 show that the government's procedures for measuring the time required for the adjudication phase include tasks that occur before adjudicators actually receive the investigative reports from OPM. More recently, OMB's February 2007 report to Congress noted that its timeliness statistics do not include "the time to … hand-off investigation files to the adjudicative agency" and estimated this handling and mailing time at up to 15 days. Not counting all days for closed pending cases. OPM's May 2006 testimony before Congress did not indicate whether the timeliness statistics on complete investigations included a type of incomplete investigation that OPM sometimes treats as being complete. In our February 2004 report, we noted that OPM's issuance of "closed pending" investigations—investigative reports sent to adjudication facilities without one or more types of source data required by the federal investigative standards—causes ambiguity in defining and accurately estimating the backlog of overdue investigations. We also noted in that report that cases that are closed pending the provision of additional information should continue to be tracked separately in the investigation phase of the clearance process. According to OPM, from February 20, 2005, through July 1, 2006, the number of initial top secret clearance investigative reports that were closed pending the provision of additional information increased from 14,841 to 18,849, a 27 percent increase. DISCO officials and representatives from some other DOD adjudication facilities have indicated that they will not adjudicate closed pending cases because critical information is missing.
OPM, however, has stated that other federal agencies review the investigative reports from closed pending cases and may determine that they have enough information for adjudication. Combining partially completed investigations with fully completed investigations overstates how quickly OPM is supplying adjudication facilities with the information they require to make their clearance-eligibility determinations. Not counting all days when inadequate investigations are returned. OMB's February 2007 report stated that its statistics do not include the time incurred to "return the files to the investigative agency for further information." OPM's procedure is to restart the measurement of investigation time for the 1 to 2 percent of investigative reports that are sent back for quality control reasons, which does not hold OPM fully accountable for total investigative time when deficient products are delivered to its customers. In fact, restarting the time measurement for reworked investigations could positively affect OPM's statistics if the reworked sections of the investigation take less time than did the earlier effort to complete the larger portion of the investigative report. IRTPA establishes timeliness requirements for the security clearance process. Specifically, it states that "each authorized adjudicative agency shall make a determination on at least 80 percent of all applications for a personnel security clearance pursuant to this section within an average of 120 days after the date of receipt of the application for a security clearance by an authorized investigative agency." IRTPA did not identify situations that could be excluded from mandated timeliness assessments. Without fully accounting for the total time needed to complete the clearance process, Congress will not be able to accurately determine whether agencies have met IRTPA-mandated requirements or determine whether legislative actions are necessary. OPM provided incomplete investigative reports to DOD adjudicators, which they used to determine top secret clearance eligibility. Almost all (47 of 50) of the sampled investigative reports we reviewed were incomplete based on requirements in the federal investigative standards. In addition, DISCO adjudicators granted clearance eligibility without requesting additional information for any of the incomplete investigative reports and did not document that they considered some adjudicative guidelines when adverse information was present in some reports. Granting clearances based on incomplete investigative reports increases risks to national security. In addition, use of incomplete investigative reports and failure to fully document adjudicative considerations may undermine the government's efforts to increase the acceptance of security clearances granted by other federal agencies. In our review of 50 initial investigations randomly sampled from the population used in our timeliness analyses, we found that 47 of the 50 investigative reports were missing documentation required by the federal investigative standards. The missing data were of two general types: (1) the absence of documentation showing that an investigator gathered the prescribed information in each of the applicable 13 investigative areas and included requisite forms in the investigative report and (2) the absence of information to help resolve issues (such as conflicting information on indebtedness) that were raised in other parts of the investigative report.
The requirements for gathering these types of information were identified in federal investigative standards published about a decade ago. At least half of the 50 reports did not contain the required documentation in 3 investigative areas: residence (33 of 50), employment (32), and education (27). In addition, many investigative reports contained multiple deficiencies within each of these areas. For example, multiple deficiencies might be present in the residence area because investigators did not document a rental record check and an interview with a neighborhood reference. Moreover, 44 of the 50 investigative reports were missing at least one piece of documentation in 2 to 6 of the 13 investigative areas. We also found a total of 36 unresolved issues in 27 of the investigative reports. The three investigative areas with the most unresolved issues were financial considerations (11 of 50 cases), foreign influence (11), and personal conduct (7). Federal standards indicate that investigations may be expanded as necessary to resolve issues. According to OPM, (1) issue resolution is a standard part of all initial investigations and periodic reinvestigations for top secret clearances and (2) all issues developed during the course of an investigation should be fully resolved in the final investigative report provided to DOD. One investigative report we examined serves as an example of the types of documentation issues we found during our review. During the course of this particular investigation, the subject reported having extramarital affairs; however, there was no documentation to show that these affairs had been investigated further. Also, the subject's clearance application indicated cohabitation with an individual with whom the subject had previously had a romantic relationship, but there was no documentation that record checks were performed on the cohabitant. Moreover, information in the investigative report indicated that the subject had defaulted on a loan with a balance of several thousand dollars; however, no other documentation suggested that this issue was explored further. When we reviewed this and other deficient investigative reports with OPM quality management officials, they agreed that the investigators should have included documentation to resolve the issues. While we found that the interview narratives in some of the 50 OPM investigative reports were limited in content, we did not identify them as deficient for the purposes of our analysis because such an evaluation would have required a subjective assessment that we were not willing to make. For example, in our assessment of the presence or absence of documentation, we found a 35-word narrative for a subject interview of a naturalized citizen from an Asian country. It stated only that the subject did not have any foreign contacts in his birth country and that he spent his time with family and participated in sports. Nevertheless, others with more adjudicative expertise voiced concern about the issue of documentation adequacy. Top officials representing DOD's adjudication facilities with whom we consulted agreed that OPM-provided investigative summaries had been inadequate.
When we reviewed our findings in meetings with the Associate Director of OPM's investigations unit and her quality management officials, they cited the inexperience of the rapidly expanded investigative workforce and variations in training provided to federal and contractor investigative staff as possible causes for the incomplete investigative reports we reviewed. Later, in official agency comments on our September 2006 report, OPM's Director indicated that some of the problems that we reported were the result of transferred staff and cases when OPM accepted DOD investigative functions and personnel. However, OPM had 2 years to prepare for the transfer, from the announcement of the transfer agreement in February 2003 to its occurrence in February 2005. Furthermore, the staff and cases were under OPM control until the investigative reports were subsequently transferred to DISCO for adjudication in January or February of 2006. In addition, 47 of the 50 investigative reports that we reviewed were missing documentation even though OPM had quality control procedures for reviewing the reports before they were sent to DOD. In our November 2005 testimony evaluating the government plan for improving the personnel security clearance process, we stated that developers of the plan may wish to consider adding other indicators of the quality of investigations. During our review, we asked the Associate Director of OPM's investigations unit if OMB and OPM had made changes to the government plan to address quality measurement and other shortcomings we identified. OPM's Associate Director said that the plan had not been modified to address our concerns but that implementation of the plan was continuing. Our review found that DISCO adjudicators granted top secret clearance eligibility for all 47 of the 50 industry personnel whose investigative reports did not have full documentation. The federal guidelines require adjudicators, in making clearance-eligibility determinations, to consider (1) guidelines covering 13 specific areas, such as foreign influence and financial considerations; (2) adverse conditions or conduct that could raise security concerns and factors that might mitigate (alleviate) the condition for each guideline; and (3) general factors related to the whole person. According to a DISCO official, DISCO and other DOD adjudicators are to record information relevant to each of their eligibility determinations in JPAS. They do this by selecting applicable guidelines and mitigating factors from prelisted responses and may type up to 3,000 characters of additional information. The adjudicators granted eligibility for the 27 industry personnel whose investigative reports (discussed in the prior section) contained unresolved issues, without requesting additional information or documenting in the adjudicative report that the information was missing. The following is an example of an unresolved foreign influence issue that was not documented in the adjudicative report, although DISCO officials agreed that additional information should have been obtained to resolve the issue before the individual was granted a top secret clearance. A state-level record check on an industry employee indicated that the subject was part owner of a foreign-owned corporation. Although the DISCO adjudicator applied the foreign influence guideline for the subject's foreign travel and mitigated that foreign influence issue, there was no documentation in the adjudicative report to acknowledge or mitigate the foreign-owned business.
When we asked why adjudicators did not provide the required documentation in JPAS, DISCO officials and adjudication trainers said that adjudicators review the investigative reports for documentation sufficient to resolve issues and make judgment calls about the amount of risk associated with each case, weighing a variety of past and present, favorable and unfavorable information about the person to reach an eligibility determination. Seventeen of the 50 adjudicative reports were missing documentation on a total of 22 guidelines for which issues were present in the investigative reports. The missing guideline documentation was for foreign influence (11), financial considerations (5), alcohol consumption (2), personal conduct (2), drug involvement (1), and foreign preference (1). DISCO officials stated that procedural changes associated with JPAS implementation contributed to the missing documentation. DISCO began using JPAS in February 2003, and it became the official system for all of DOD in February 2005. Before February 2005, DISCO adjudicators were not required to document the consideration of a guideline issue unless the adverse information could disqualify an individual from being granted clearance eligibility. After JPAS implementation, DISCO adjudicators were trained to document in JPAS their rationale for the clearance determination and any adverse information from the investigative report, regardless of whether an adjudicative guideline issue could disqualify an individual from obtaining a clearance. The officials also attributed the missing guideline documentation to a few adjudicators attempting to complete more adjudication determinations. Decisions to grant clearances based on incomplete investigations increase risks to national security because individuals can gain access to classified information without being vetted against the full federal standards and guidelines. Furthermore, if adjudication facilities send the incomplete investigations back to OPM for more work, the adjudication facilities must use adjudicator time to review cases more than once and then use additional time to document problems with the incomplete investigative reports. Incomplete investigations and adjudications undermine the government’s efforts to move toward greater clearance and access reciprocity. An interagency working group, the Security Clearance Oversight Steering Committee, noted that agencies are reluctant to be accountable for poor-quality investigations or adjudications conducted by other agencies or organizations. To achieve fuller reciprocity, clearance-granting agencies need to have confidence in the quality of the clearance process. Without full documentation of investigative actions, information obtained, and adjudicative decisions, agencies could continue to require duplicative investigations and adjudications. Incomplete timeliness data limit stakeholders’ and decision makers’ visibility into the process and hamper their efforts to address long-standing delays in the personnel security clearance process. For example, not accounting for all of the time used when personnel submit an application multiple times before it is accepted limits the government’s ability to (1) accurately monitor the time required for each step in the application-submission phase and (2) identify positive steps that facility security officers, DISCO adjudicators, OPM investigative staff, and other stakeholders can take to speed the process. 
The timeliness-related concerns identified in my testimony show the fragmented approach that the government has taken to addressing clearance problems. When I testified before this Subcommittee in November 2005, we were optimistic that the government plan for improving the clearance process, prepared under the direction of OMB’s Deputy Director for Management, would be a living document that would provide the strategic vision for correcting long-standing problems in the personnel security clearance process. However, nearly 2 years after first commenting on the plan, we have not been provided with a revised plan that lays out how the government intends to address the shortcomings in the plan that we identified during our November 2005 testimony. Continued failure to address these shortcomings could significantly limit the positive impact of improvements that the government has made in other portions of the clearance process, such as hiring more investigators and promoting reciprocity. While eliminating delays in the clearance process is an important goal, the government cannot afford to achieve that goal by providing investigative and adjudicative reports that are incomplete in the key areas required by federal investigative standards and adjudicative guidelines. Also, incomplete investigative and adjudicative reports could suggest to security managers that there is at least some evidence to support agencies’ concerns about the risks of accepting clearances issued by other federal agencies, thereby negatively affecting OMB’s efforts toward achieving greater reciprocity. Further, as we pointed out in November 2005, the almost total absence of quality metrics in the governmentwide plan for improving the clearance process hinders Congress’s oversight of these important issues. Finally, the missing documentation could have longer-term negative effects, such as requiring future investigators and adjudicators to devote time to obtaining the documentation missing from current reviews when it is time to update the clearances currently being issued. Mr. Chairman and Members of the Subcommittee, this concludes my prepared statement. I would be happy to answer any questions you may have at this time. For further information regarding this testimony, please contact me at (202) 512-5559 or stewartd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals who made key contributions to this testimony are Jack E. Edwards, Assistant Director; Kurt A. Burgeson; Nicolaas C. Cornelisse; Alissa H. Czyz; Ronald La Due Lake; Beverly C. Schladt; and Karen D. Thornton. DOD Personnel Clearances: Questions and Answers for the Record Following the Second in a Series of Hearings on Fixing the Security Clearance Process. GAO-06-693R. Washington, D.C.: June 14, 2006. DOD Personnel Clearances: New Concerns Slow Processing of Clearances for Industry Personnel. GAO-06-748T. Washington, D.C.: May 17, 2006. DOD Personnel Clearances: Funding Challenges and Other Impediments Slow Clearances for Industry Personnel. GAO-06-747T. Washington, D.C.: May 17, 2006. GAO’s High-Risk Program. GAO-06-497T. Washington, D.C.: March 15, 2006. Questions for the Record Related to DOD’s Personnel Security Clearance Program and the Government Plan for Improving the Clearance Process. GAO-06-323R. Washington, D.C.: January 17, 2006. 
DOD Personnel Clearances: Government Plan Addresses Some Long-standing Problems with DOD’s Program, But Concerns Remain. GAO-06-233T. Washington, D.C.: November 9, 2005. Defense Management: Better Review Needed of Program Protection Issues Associated with Manufacturing Presidential Helicopters. GAO-06-71SU. Washington, D.C.: November 4, 2005. Questions for the Record Related to DOD’s Personnel Security Clearance Program. GAO-05-988R. Washington, D.C.: August 19, 2005. DOD Personnel Clearances: Some Progress Has Been Made but Hurdles Remain to Overcome the Challenges That Led to GAO’s High-Risk Designation. GAO-05-842T. Washington, D.C.: June 28, 2005. DOD’s High-Risk Areas: Successful Business Transformation Requires Sound Strategic Planning and Sustained Leadership. GAO-05-520T. Washington, D.C.: April 13, 2005. High-Risk Series: An Update. GAO-05-207. Washington, D.C.: January 2005. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Individuals working for private industry are playing a larger role in national security work conducted by the Department of Defense (DOD) and other federal agencies. As of May 2006, industry personnel held about 34 percent of DOD-maintained personnel security clearances. The damage that the unauthorized disclosure of classified information can cause to national security necessitates the prompt and careful consideration of who is granted a security clearance. Long-standing delays in determining clearance eligibility and other challenges led GAO to designate the DOD personnel security clearance program as a high-risk area in January 2005 and again in GAO’s January 2007 update of the high-risk areas. In February 2005, DOD transferred its security clearance investigations functions to the Office of Personnel Management (OPM) and now obtains almost all of its clearance investigations from OPM. The Office of Management and Budget (OMB) is responsible for effective implementation of policy relating to determinations of eligibility for access to classified information. This testimony addresses the timeliness of the process and the completeness of documentation used to determine the eligibility of industry personnel for top secret clearances in January and February 2006. This statement relies primarily on GAO’s September 2006 report (GAO-06-1070). GAO’s analysis of timeliness data showed that industry personnel contracted to work for the federal government waited more than 1 year on average to receive top secret clearances, longer than OMB- and OPM-produced statistics would suggest. GAO’s analysis of 2,259 cases in its population showed the process took an average of 446 days for initial clearances and 545 days for clearance updates. While the government plan has a goal for the application-submission phase of the process to take 14 days or less, that phase took an average of 111 days. In addition, GAO’s analyses showed that OPM used an average of 286 days to complete initial investigations for top secret clearances, well in excess of the 180-day goal specified in the plan that OMB and others developed for improving the clearance process. Finally, the average time for adjudication (determination of clearance eligibility) was 39 days, compared to the 30-day requirement that began in December 2006. An inexperienced investigative workforce, incomplete use of technology, and other factors underlie these delays. Delays may increase costs for contracts and risks to national security. In addition, statistics that OMB and OPM report to Congress on the timeliness of the clearance process do not portray the full length of time it takes many applicants to receive a clearance. GAO found several issues with the statistics, including limited information on reinvestigations for clearance updating and failure to measure the total time it took to complete the various phases of the clearance process. Not fully accounting for all the time used in the process hinders congressional oversight of the efforts to address the delays. OPM provided incomplete investigative reports to DOD, and DOD personnel who review the reports to determine a person’s eligibility to hold a clearance (adjudicators) granted eligibility to industry personnel whose investigative reports contained unresolved issues, such as unexplained affluence and potential foreign influence. In its review of 50 investigative reports for initial clearances, GAO found that almost all (47 of 50) cases were missing documentation required by federal investigative standards. 
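The phase-level figures above lend themselves to a quick arithmetic check against the plan's goals. The following minimal sketch is ours, not GAO's or OMB's; it simply restates the averages reported in this testimony and compares them with the goals named above.

# Minimal sketch (ours, not from the report): compare the average phase
# durations reported above for initial top secret clearances against the
# goals in the government plan. All figures come from the text above.
phases = {
    # phase name: (average days observed, goal in days)
    "application submission": (111, 14),
    "investigation": (286, 180),
    "adjudication": (39, 30),
}

for name, (avg, goal) in phases.items():
    print(f"{name}: {avg} days on average vs. a {goal}-day goal ({avg - goal:+d} days)")

# The phase averages sum to 436 days; the testimony reports a 446-day
# average for initial clearances overall, so the measured phases do not
# account for every day in the end-to-end process.
print("sum of phase averages:", sum(avg for avg, _ in phases.values()), "days")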
Moreover, federal standards indicate that expansion of investigations may be necessary to resolve issues, but GAO found at least one unresolved issue in 27 of the reports. GAO also found that DOD adjudicators granted top secret clearance eligibility to all 27 industry personnel whose investigative reports contained unresolved issues, without requesting additional information or documenting in the adjudicative report that the information was missing. In its November 2005 assessment of the government plan for improving the clearance process, GAO raised concerns about the limited attention devoted to assessing quality, but the plan has not been revised to address the shortcomings GAO identified. The use of incomplete investigations and adjudications in granting top secret clearance eligibility increases the risk of unauthorized disclosure of classified information. Also, it could negatively affect efforts to promote reciprocity (an agency’s acceptance of a clearance issued by another agency) being developed by an interagency working group headed by OMB’s Deputy Director.
Humanitarian parole—in the context of immigration—refers to official permission for an otherwise inadmissible alien to legally enter the United States temporarily. This includes aliens required to have a visa to visit or immigrate to the United States who are unable to obtain one, either because of ineligibility or because urgent circumstances make it impractical to apply for one. Specifically, the Immigration and Nationality Act grants the Secretary of Homeland Security discretionary authority to parole an alien into the United States temporarily on a case-by-case basis for urgent humanitarian reasons, such as to obtain medical treatment not available in his or her home country, visit a dying relative, or reunify young children with relatives. Granted for a maximum of 1 year, humanitarian parole does not constitute permanent admission of the alien into the country. Once the purpose of the parole is fulfilled, the alien is to leave the United States. According to the HAB protocols for adjudicating humanitarian parole applications, humanitarian parole is an extraordinary measure, to be used sparingly and not to circumvent normal visa-issuing procedures. The humanitarian parole application process begins when HAB receives an application and supporting evidence (e.g., a doctor’s statement regarding a physical ailment or a death certificate for a family member) from the requester, who may be the applicant, the applicant’s attorney, or someone applying on the applicant’s behalf. Upon receiving an application, a HAB staff member checks to ensure that the applicant is seeking humanitarian parole, that the required information is entered on the application form (Form I-131), and that the package includes the $305 application processing fee. If the application is complete, the HAB staff member enters the information from the Form I-131 into the PCTS database. In turn, PCTS generates a letter to confirm receipt of the application and assigns a case number. The adjudicator then runs a security check on both the applicant (called the beneficiary) and the person requesting humanitarian parole, if different from the applicant, against two federally operated security databases. If there is no match with immigration or national security databases indicating a security issue with the person(s) applying for humanitarian parole, the HAB Chief (or designee) signs the confirmation-of-receipt letter, which is sent to the applicant or the person applying on his or her behalf. The HAB staff then create a working case file. Urgent cases, such as those related to medical treatment, are placed in red folders and given priority over less urgent cases, which are placed in green folders. HAB officials told us that urgent cases are processed immediately. Figure 1 illustrates the process for adjudicating applications for humanitarian parole. The 8,748 humanitarian parole applications that HAB adjudicated from October 1, 2001, through June 30, 2007, displayed various characteristics, and grant and denial rates did not differ for most of these characteristics, although there were some differences in adjudicator recommendations. Specifically, 54 percent of applicants were female; 46 percent, male. Forty-five percent of the applicants came from 11 countries, with Mexico having the greatest number of applicants. Most, 68 percent, were under the age of 40. Sixty-four percent of the requests for humanitarian parole were for two reasons—family reunification (49 percent) and medical emergency (15 percent). 
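To make the intake steps described above concrete, here is a minimal sketch of that triage logic. It is an illustration only: the report does not publish HAB's software, and every name below (the Application fields, the triage function) is hypothetical.

# Illustrative sketch of HAB's intake steps as described above; this is
# not HAB's actual system, and all names here are hypothetical.
from dataclasses import dataclass

APPLICATION_FEE = 305  # dollars, per the report


@dataclass
class Application:
    form_i131_complete: bool  # required information entered on Form I-131
    fee_included: bool        # $305 processing fee included in the package
    urgent: bool              # e.g., a medical-treatment case
    security_hit: bool        # match in a federal security database


def triage(app: Application) -> str:
    # Completeness and fee checks come before any PCTS entry.
    if not (app.form_i131_complete and app.fee_included):
        return "incomplete: do not enter into PCTS"
    # PCTS entry would assign a case number and generate a receipt letter;
    # security checks are then run on beneficiary and requester.
    if app.security_hit:
        return "hold: security issue to be resolved"
    # Urgent cases go in red folders and are processed immediately.
    return "red folder (process immediately)" if app.urgent else "green folder"


print(triage(Application(True, True, urgent=True, security_hit=False)))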
Of the 8,748 adjudicated applications, 6,615, or about 76 percent, were denied. We estimate that 57 percent of the denials cited as a reason that the applicant had not first exhausted all other avenues of immigration, such as applying for a visa, and that in 13 percent of the denials, applicants had committed an infraction of immigration law or other crime—both of which are generally disqualifying factors, absent what the USCIS Web site on humanitarian parole describes as “a very compelling emergency.” We found few differences in the granting or denial rates with regard to gender and, with two exceptions, with regard to country of residence. However, we did find a difference in adjudication decisions for applicants under age 18, who had a higher grant rate than other age groups. This is consistent with the stated purposes of humanitarian parole and the HAB protocols that facilitate family reunification of minors in circumstances of compelling humanitarian need. There were some differences in grant/denial recommendation rates among adjudicators, with a denial recommendation rate of 66 to 84 percent for the 6 adjudicators with the greatest workloads, who made 15,000 adjudication recommendations (84 percent of all adjudicator recommendations) from fiscal year 2002 through June 30, 2007. However, there was considerably greater variation among those who adjudicated fewer cases, with denial rates ranging from 43 percent to 93 percent of total recommendations among 18 other adjudicators who made 2,957 recommendations, or 16 percent of the total. From October 1, 2001, through June 30, 2007, HAB adjudicated 8,748 applications for humanitarian parole; of these, 24 percent were granted humanitarian parole, while 76 percent were denied. Table 1 displays data on humanitarian parole adjudication decision outcomes for fiscal years 2002 through 2007. Fifty-four percent of the humanitarian parole applicants were female and 46 percent were male. The gender ratios were generally consistent year to year, with the exception of fiscal year 2005, when 51 percent of applicants were male and 49 percent were female. Table 2 shows the number of humanitarian parole applicants by gender for fiscal years 2002 through 2007. Individuals from 167 different countries applied for humanitarian parole. Of the 8,748 applicants, 3,933, or 45 percent, were from 11 countries; 4,632 applicants, or 53 percent, were residents of 156 other countries; and no country of residence was listed in PCTS for 183 applicants (2 percent). Residents of Mexico constituted the largest number of humanitarian parole applicants, about 9 percent. Table 3 provides data on the number of final adjudications by country of residence for the top 11 countries. Most of the applicants for humanitarian parole were under age 40. Of the 8,692 applicants for whom the application contained data on age in PCTS, 5,966, or 68 percent, were under age 40. Twenty-seven percent of all applicants were under the age of 18. Table 4 shows the number of humanitarian parole applicants by age group. HAB officials identified four broad reasons for humanitarian parole applications: (1) life-threatening medical emergencies; (2) family reunification for compelling humanitarian reasons; (3) emergent reasons, such as to visit an ill family member or to resolve matters associated with the death of a relative; and (4) “other,” such as a caregiver needed to care for someone in the United States. 
We estimated that 64 percent of the requests for humanitarian parole were for two reasons—family reunification for compelling humanitarian reasons (49 percent) and medical emergency (15 percent). Figure 2 shows the percentage of applications adjudicated by reason for the request for fiscal years 2002 through 2007, based on a probability sample of 462 cases that we reviewed. The PCTS database shows that since fiscal year 2002, 76 percent of all applicants were denied humanitarian parole. Based upon our review of the narrative summaries in our sample of denied applications, we identified 10 reasons adjudicators cited when recommending that a humanitarian parole application be denied. HAB officials agreed that these categories represented the reasons for denial; they noted that because their decisions are discretionary, none of these reasons are in and of themselves automatically disqualifying. Rather, these are the reasons the HAB adjudicators cited in the sampled cases as the basis for their denial recommendations. The 10 categories were:
The applicant had not exhausted alternative immigration processes available to them for which they might have been eligible, such as obtaining a visa, absent urgent circumstances that made it impractical to do so.
The applicant provided no evidence supporting an emergent condition, such as a death certificate in the case where the request was to attend a funeral.
The applicant provided no or inadequate evidence to support the reason for the request for humanitarian parole, such as a claimed medical emergency.
The applicant had committed a prior immigration violation or other criminal violation.
The purpose of the parole was not temporary in nature. That is, HAB believed that the applicant intended to stay in the United States beyond the duration of a parole period.
Other family members already in the United States could provide care to the person intended to benefit from the presence of the applicant.
The needed medical treatment was available outside the United States.
There was insufficient evidence of adequate financial support to prevent the applicant from becoming a public charge while in the United States.
The applicant provided no proof of familial relationship in cases where a family relationship was claimed as the basis of the application.
Other: This category was for applications that did not fall into the other categories. For example, it included cases in which persons already approved for humanitarian parole mistakenly applied to HAB for an extension of their parole period rather than applying to a local USCIS district office, and cases in which an applicant for lawful permanent residency departed the United States without first obtaining the needed permission from USCIS and then applied for humanitarian parole to re-enter the United States, a situation that is not valid grounds for humanitarian parole.
In recommending that an application be denied, adjudicators sometimes cited more than one reason in explaining their recommendation. For example, an adjudicator may have cited both that the applicant had not exhausted alternative immigration processes available and that the applicant provided no evidence supporting an emergent condition. Table 5 below shows the estimated percentage of applications where a particular reason for denial was cited. 
Table 5 shows that an estimated 57 percent of the denials cited as a reason that the applicant had not first exhausted other avenues of immigration, such as applying for a visa, absent urgent circumstances that made it impractical to do so. Table 5 also shows that an estimated 46 percent of all the denied applicants had not provided evidence of an emergent condition and that an estimated 13 percent of denied applicants had committed an infraction of immigration law or other crime. These and the other reasons cited are generally disqualifying factors in applications for humanitarian parole. HAB has considerable discretion in adjudicating humanitarian parole applications. According to HAB’s guidance on adjudicating humanitarian parole applications, exercising discretion involves the ability to consider all factors in making a decision on whether a parole request rises to the level of an urgent humanitarian reason. The exercise of discretion requires that an adjudicator take into account applicable immigration law, regulations, and policy, as well as the totality of the circumstances of the case, including any significant mitigating factors. Most importantly, according to the guidance, discretionary decisions on humanitarian parole applications should be reached in a fair, equitable, and objective manner. We analyzed the PCTS data to determine whether there were differences in grant and denial rates according to applicants’ gender, country of residence, and age, and by adjudicator. The latter factor—the adjudicator involved—must be considered in the context of the adjudication process, which requires that each application be reviewed by two different adjudicators and that, if the first two adjudicators disagree in their recommendations, a third adjudicator review the application and make a recommendation. Then, the HAB Branch Chief or a designee is required to provide supervisory review and make the final decision. Therefore, while individual adjudicators could vary in their recommendations, the internal control system is set up to ensure that no single adjudicator has a decisive role in the outcome decision. (We discuss these internal controls later in this report.) Our analysis showed virtually no difference in the grant and denial rates according to applicants’ gender. With regard to country of residence, of the 11 foreign nations from which most applicants applied, applicants from Haiti had a lower rate of approval than the others, while those from Cuba, El Salvador, India, Iran, Iraq, and Mexico had almost identical rates, and applicants from Lebanon had the highest grant rate. HAB officials attributed the lower rate for Haitians to special immigration eligibility rules for Haitians that were not well understood by applicants, and the higher rate for Lebanese residents to special humanitarian circumstances resulting from evacuations associated with the July 2006 conflict in southern Lebanon between Israel and Hezbollah. Humanitarian parole-granting rates were higher for applicants under age 18 than they were for adults, consistent with HAB protocols and practices that favor reunification of children under age 18 with parents or close relatives. Grant and denial recommendation rates by individual adjudicators varied, with greater variation among those who adjudicated fewer cases. 
According to HAB officials, variations were expected in the grant/denial recommendation rates among adjudicators, since the facts and circumstances of each application varied and adjudicators do not all review the same applications. However, these officials stated that the application process had been designed with multiple checks to ensure that no single person would be able to unfairly influence the decision outcome, and that informal roundtable discussions among many staff were also used to deal with particularly difficult cases. As a result, they said, while grant/denial recommendation rates could vary by adjudicator, the process had been set up to achieve outcomes based on what amounts to a consensus, rather than being the product of a single adjudicator’s recommendation. For fiscal years 2002 through 2007, there were few differences in the annual grant/denial rates for male and female applicants in adjudicated humanitarian parole decisions, with the exception of fiscal year 2005, when the grant rate for females was 21 percent and the grant rate for males was 24 percent. Table 6 shows the yearly approval and denial rates by gender. With two exceptions, there were few differences in the adjudication outcomes for grant or denial of humanitarian parole applications by country of residence. Setting aside applicants from Haiti and Lebanon, denial rates for the 11 countries that had the most applicants ranged from 68 percent to 82 percent, compared to the overall denial rate of 76 percent. The denial rate for Haitian applicants was 92 percent; in contrast, applicants from Lebanon had the lowest denial rate—45 percent. According to HAB officials, the higher denial rate for Haitians may be in part a result of a high number of applications made by Haitians applying for humanitarian parole on behalf of relatives who did not qualify as derivative beneficiaries (spouses and dependent children) under the Haitian Refugee Immigration Fairness Act (HRIFA) of 1998. For example, an applicant might have applied on behalf of a sibling or extended relative who did not meet the requirements of the Act or those of the humanitarian parole program. With respect to applicants from Lebanon, HAB officials told us that the July 2006 conflict between Israel and Hezbollah had generated applications for humanitarian parole under special urgent circumstances that probably produced a high grant rate. Table 7 shows the percentage of humanitarian parole applications granted and denied for the 11 countries from which the most applicants originated, as well as for the total program, for fiscal years 2002 through 2007. One of the reasons individuals can request humanitarian parole is to reunite young children with family members. HAB officials told us that they have followed a practice of applying this policy to those who are under age 18, since 18 is the age of majority in many countries. Consistent with this program goal, HAB granted humanitarian parole to 35 percent of the applicants under 18, a higher rate than those for other age groups and 11 percentage points higher than the overall grant rate of 24 percent. Table 8 shows the grant and denial rates by age group. HAB’s process for adjudicating humanitarian parole requests requires that at least two adjudicators review the application and make a recommendation to grant or deny the request. For the 8,748 applications adjudicated from October 1, 2001, through June 30, 2007, 27 adjudicators made a total of 17,963 recommendations. 
Our analysis of PCTS data showed that of the 17,963 recommendations, 13,480 (75 percent) were to deny the application. Grant/denial recommendation rates varied to some extent among adjudicators, with a denial rate of 66 to 84 percent for the 6 adjudicators with the greatest workloads, who made 15,000, or 84 percent, of all adjudicator recommendations from fiscal year 2002 through June 30, 2007. Collectively, these six adjudicators had a recommendation denial rate of 77 percent, slightly higher than the overall 75 percent recommendation denial rate for the period. Of these six adjudicators, the four who had the highest number of humanitarian parole cases—accounting for just over 69 percent of all adjudicator recommendations—had recommendation denial rates that ranged from just over 76 percent to just under 84 percent. However, there was considerably greater variation among those who adjudicated fewer cases, with denial recommendation rates ranging from 43 percent to 93 percent of total recommendations among 18 other adjudicators who each made 15 or more recommendations, and a total of 2,957 recommendations, or 16 percent of the total, from fiscal year 2002 through June 30, 2007. Table 9 shows the approval and denial rates for all 27 adjudicators. When we discussed these data with HAB officials, they noted that three factors should be taken into consideration. First, the facts and circumstances of each application varied, and it is not expected that the grant/denial recommendation rate would be the same for all adjudicators because they do not all review the same applications. Second, each adjudicator brings a different background and work experience to the position. Thus, adjudicators might judge the facts and circumstances of the same application somewhat differently. Third, no individual adjudicator has sole authority to make the final adjudication decision. Each adjudication outcome requires at least two adjudicators’ recommendations and sometimes a “tie-breaker” recommendation by a third adjudicator before a final decision is made by the HAB Branch Chief or a designee. HAB has designed internal controls to help ensure that requests for humanitarian parole are decided in a fair, equitable, and objective manner, and our review of case files and the PCTS database found that these controls have been generally effective, that is, functioning as intended. However, three areas could be strengthened to improve HAB’s ability to adhere to internal control standards. First, following HAB’s transfer from ICE to USCIS, HAB may no longer have a sufficient number of permanent staff to ensure continued compliance with its policies and procedures. Second, HAB does not have a formal training program for staff unfamiliar with humanitarian parole who may be detailed to its office to help process applications, thereby increasing the risk that these adjudicators may not have the expertise to make decisions in accordance with applicable guidelines. Third, USCIS’s Web site—the primary means of communicating program criteria to potential applicants—has limited information about the circumstances under which a person may apply for humanitarian parole and therefore may be of limited use to those who seek information about the program. HAB designed internal controls to help ensure requests for humanitarian parole were decided in a fair, equitable, and objective manner, and our review of case files and PCTS data found these controls were generally effective, that is, functioning as intended. 
For example, our standards for internal control in the federal government require that programs have policies and procedures to help ensure that management’s directives are carried out. HAB has two documents—the Protocol for Humanitarian Parole Requests and the Standard Operating Procedures for Humanitarian Paroles—that provide detailed instructions on how to adjudicate and process humanitarian parole applications. The protocols list the major reasons for humanitarian parole and the factors adjudicators are to consider given the type of humanitarian parole request. For example, in considering medical requests, HAB adjudicators are to consider, among other things, the nature and severity of the medical condition for which treatment is sought and whether or not the requested treatment is available in the applicant’s home or a neighboring country. Regarding family reunification, HAB adjudicators are to consider, among other things, whether the request is designed to circumvent normal visa issuance procedures. Appendix II contains more information on the factors HAB adjudicators are to consider when adjudicating humanitarian parole applications. The procedures call for two adjudicators to review each application and make a recommendation regarding whether the application should be approved or denied. Adjudicators are to provide a short summary explaining the reasoning behind their recommendation in a text box in PCTS. Should the two adjudicators disagree, a third adjudicator, or “tie-breaker,” is asked to review the application and make a recommendation. The protocols also require the HAB Branch Chief or a designee to review the application and make a final decision. According to the HAB Branch Chief, the process of having two adjudicators review each case, including a third adjudicator if needed, as well as the Branch Chief’s review and final decision on approval or disapproval, is intended to provide consistency in applying the decision criteria. The Branch Chief also told us that in difficult cases it was not uncommon for all the professional staff in the office to have an informal roundtable discussion to ensure that all the factors and complexities of the application were adequately and fairly considered. He also told us that if he decides to override adjudicators’ recommendations in a case, he does not finalize such a decision until he has first discussed the case with at least one of his two supervisors. HAB also maintains in PCTS the information contained in the application, as well as data such as the adjudicators’ summary explanations of the case, the recommendations made by the various adjudicators, and the decision reached. The system also contains built-in checks to help ensure internal controls are followed. For example, the PCTS database will not allow a grant or denial letter to be printed unless the system contains information showing that two adjudicators reviewed the application, as evidenced by their having filled in the appropriate text boxes. Our review of a sample of humanitarian parole application case files and associated data in PCTS showed that HAB staff followed established policies and procedures. For example, in all cases the PCTS database showed that at least two adjudicators reviewed each application and had written an explanation in the designated text box explaining the reasoning behind their adjudication recommendation. 
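The two-adjudicator review, tie-breaker, supervisory decision, and PCTS edit check described above amount to a small decision protocol. The sketch below is our own rendering of that flow, not HAB's or PCTS's actual code; all names are hypothetical, and the "simple majority" chief is a stand-in for the Branch Chief's judgment.

# Illustrative sketch (not HAB's or PCTS's code) of the review flow
# described above: two adjudicator recommendations, a tie-breaker on
# disagreement, a final decision by the Branch Chief or designee, and a
# PCTS-style edit check before a decision letter can be printed.
from typing import Callable, List

def review_application(adjudicators: List[Callable[[], str]],
                       chief: Callable[[List[str]], str]) -> str:
    # Two adjudicators review the application and record a recommendation
    # ("grant" or "deny"), each with a written summary in a PCTS text box.
    recommendations = [adjudicators[0](), adjudicators[1]()]
    if recommendations[0] != recommendations[1]:
        # Disagreement brings in a third adjudicator as tie-breaker.
        recommendations.append(adjudicators[2]())
    # The Branch Chief (or a designee) reviews and makes the final decision.
    return chief(recommendations)

def letter_can_print(summary_text_boxes: List[str]) -> bool:
    # Edit check: no grant or denial letter prints unless at least two
    # adjudicators have filled in their summary text boxes.
    return sum(1 for box in summary_text_boxes if box.strip()) >= 2

decision = review_application(
    [lambda: "grant", lambda: "deny", lambda: "deny"],
    chief=lambda recs: max(set(recs), key=recs.count),  # majority stand-in
)
print(decision, letter_can_print(["summary one", "summary two"]))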
Our direct observation of PCTS in use confirmed that the edit checks built into the system, which ensure that all required steps are taken before a grant or denial letter can be printed, were working. In addition, all hard copy files we reviewed contained a letter notifying the applicants or their representative of HAB’s decision and signed by a HAB official. The letters in the files were signed by the HAB Branch Chief or a designee, indicating that supervisory review was performed. Our probability sample allowed us to conclude that this control was effective for PCTS applications in the March 1, 2007, to June 30, 2007, time period. HAB has a goal of adjudicating humanitarian parole applications within 60 to 90 calendar days, although HAB officials told us that decisions in the most urgent cases are sometimes made almost immediately. As shown in figure 3, from fiscal year 2002 through fiscal year 2006, HAB achieved this goal, with the median processing time for grants ranging from 8 to 18 days and the median time for denials ranging from 10 to 22 days. Processing some applications took longer, for various reasons. For example, HAB officials cited delays in obtaining the results of DNA testing to confirm a family relationship. For fiscal year 2007 through June 30, 2007, the median time to adjudicate cases increased to 53 days for grants and 36 days for denials. HAB officials told us that they had increased the number of security databases against which applicants provisionally approved for humanitarian parole are checked prior to granting final approval. As a result, the median number of days to process applications increased in fiscal year 2007 compared to previous years. All 10 immigration attorneys we interviewed, as well as two accredited representatives of non-profit organizations that offer legal assistance to immigrants, including humanitarian parole applicants, told us that they were generally satisfied with the speed of the adjudication of applications and had no complaints about the time HAB took to adjudicate their clients’ applications. Five of the 12 attorneys and accredited representatives also told us that HAB decided their cases within 30 to 45 calendar days of the submission of the application. Ten of the 12 attorneys and accredited representatives with whom we spoke were generally satisfied with the responsiveness of the HAB staff, including their willingness to grant applicants more time to provide additional evidence to support applications for humanitarian parole. Our work showed that controls related to staffing, training, and communication with stakeholders could be strengthened to enable HAB to carry out its mission and to more fully comport with internal control standards. These areas relate to the number of HAB staff needed to ensure that it continues to follow its policies and procedures, a training program for new staff and detailed staff not familiar with humanitarian parole, and whether USCIS’s Web site—the primary means of communicating program criteria to potential applicants—has sufficient information about the circumstances under which a person may apply for humanitarian parole. Prior to the transfer of HAB from ICE to USCIS, HAB had 11 permanent staff, including the Branch Chief, for processing requests for both humanitarian and other types of parole. 
According to the HAB Branch Chief, this staffing level helped ensure that HAB (1) adhered to its policies and procedures of having two adjudicators review each application, a third adjudicator when necessary to break ties, different adjudicators to review applications submitted for reconsideration, and supervisory review of each application; (2) performed data entry requirements; and (3) could meet its goal of adjudicating applications within 60 to 90 calendar days. However, the memorandum of agreement that transferred the humanitarian parole program from ICE to USCIS in August 2007 provided for the reassignment of only the Branch Chief and two adjudicators to administer the humanitarian parole program. Standards for internal control in the federal government state that an agency must have sufficient staff, including supervisors, to effectively carry out its assigned duties. Having only a Chief and two adjudicators to administer the humanitarian parole program may not be sufficient to ensure that HAB can continue to comply with its policies and procedures. For example, as noted above and according to HAB policies and procedures, two adjudicators are to review each application. Should the two adjudicators disagree, a third adjudicator, a tie-breaker, is needed to review the application and make a recommendation. The HAB Branch Chief or a designee is to review each application and make a final decision. With only two adjudicators, there is no one to act as a tie-breaker, because the Branch Chief normally does not assume this role. In addition, if an applicant’s request for humanitarian parole is denied, he or she has the opportunity to provide additional information and have HAB reconsider the application. HAB protocols recommend that in these situations, two different adjudicators—and a third adjudicator when necessary to break a tie—review the reconsidered application. Having only two adjudicators could also strain the program’s ability to continue to meet its goal of processing applications within 60 to 90 days. According to the HAB Branch Chief, based on HAB’s current workload, at least nine staff members are needed to administer the humanitarian parole program—a branch chief, a senior adjudications officer, four adjudications officers, two data entry and case management clerks to enter application information into PCTS and to create and maintain the hardcopy folders of the cases, and one case manager to respond to the 400 to 500 associated inquiries that the branch receives each year. Until permanent staff are requested, approved, and assigned, HAB plans to use adjudications officers detailed from other parts of USCIS to help adjudicate humanitarian parole applications. In addition to having a limited number of staff transferred with the humanitarian parole program, the staff members who transferred were those who had relatively less experience processing humanitarian parole applications. The two permanent adjudicators now at HAB accounted for 11 percent of the cases adjudicated between October 1, 2001, and June 30, 2007. None of the top three adjudicators, who decided a total of 61 percent of the cases during that period, transferred to USCIS. HAB officials also told us that when the program had 11 staff (including the Branch Chief), if a backlog of cases began to develop, they could have everyone work to reduce it. 
With only two permanent adjudicators and the Branch Chief, HAB does not have the staff needed to address backlogs that might develop or to provide backup when staff take leave for illness, training, or vacations. Although HAB plans to use detailed adjudicators as necessary, HAB officials told us that they have no formal training curriculum on how to adjudicate humanitarian parole applications. Officials told us that to date, adjudicators have come from the ranks of those who have considerable experience in immigration-related issues and that this enabled adjudicators to learn how to adjudicate humanitarian parole applications after brief on-the-job instruction. Officials also told us that they intend to develop a training curriculum on the adjudication of humanitarian parole cases. Internal control standards in the federal government state that providing formal training is a method by which an agency can address expertise and experience issues. Until a training program is in place, staff detailed to HAB and new permanent staff not familiar with adjudicating humanitarian parole applications may not get the training they need. Having untrained staff increases the risk that they may not have the expertise to make humanitarian parole decisions in accordance with applicable guidelines. Internal control standards in the federal government state that agencies should establish open and effective communication channels with customers and other groups that can provide significant input on agency products and services. This is particularly important with respect to humanitarian parole applications, where the applicant pays a $305 fee for a government service. Our standards for internal control offer guidelines for communication between an agency and both its internal and external customers. These guidelines state that an agency should provide sufficient information so that clients can understand the rules and processes and can make effective use of the services the agency is supposed to offer. However, those seeking humanitarian parole may not fully understand the rules for applying. As noted earlier in this report, an estimated 57 percent of those denied humanitarian parole were denied, in whole or in part, because the requester had not exhausted alternative immigration processes, such as requesting a visa, a process that generally must be used prior to requesting humanitarian parole, absent urgent circumstances that made it impractical to do so. We also found that an estimated 13 percent of those denied humanitarian parole had committed an infraction of immigration law or other crime, which is also generally a disqualifying factor. USCIS uses its Web site as the primary tool to communicate information about the humanitarian parole process to the public. The U.S. Department of Health and Human Services has developed Research-Based Web Design and Usability Guidelines. The 2006 guidelines state that Web sites should be designed to facilitate effective human-computer interaction and that if the content of a Web site does not provide the information needed by users, it will provide little value no matter how easy it is to use. The instructions included on the USCIS Web site on how and under what circumstances to apply for humanitarian parole were limited. 
For example, the Web site does not state that to be eligible for humanitarian parole, applicants must generally have first exhausted other available avenues of relief, other than in circumstances of a compelling humanitarian emergency or when urgency makes it impractical to do so. The instructions state that the applicant is to submit a statement on “why a U.S. visa cannot be obtained instead of having to apply for humanitarian parole” but do not state that an application for a visa generally should have been made and rejected, again absent urgent circumstances that make it impractical to do so. Further, the written instructions may be confusing to some applicants. For example, the instructions state that “anyone can file an application for humanitarian parole,” including “the prospective parolee, a sponsoring relative, an attorney, or any other interested individual or organization.” While technically true, this language could lead persons to file and pay the $305 application fee when they first should have exhausted other immigration alternatives (such as filing for a visa), except when there are circumstances that constitute an emergency. This potential lack of information about the need for most applicants to first exhaust other immigration alternatives, absent an emergency, leaves open the possibility that some applicants might not realize that they generally have to have been denied a visa to request humanitarian parole. As a result, applicants may be losing time, as well as the $305 application fee required to apply for humanitarian parole. In addition, HAB’s workload could be increased unnecessarily, thereby putting additional strain on its limited staff. Although HAB has extensive protocols on what to consider when adjudicating humanitarian parole applications, there is little information on USCIS’s Web site regarding what HAB considers when adjudicating these applications, and finding such information can be difficult. Six of the 12 attorneys and accredited representatives we interviewed said that they and their clients would have benefited from more guidance on the application process, including an explanation of what supporting documentation and evidence to include in the application, the adjudication criteria, and examples of circumstances warranting humanitarian parole. Clearer and more explicit information about the humanitarian parole process could better inform potential applicants and their attorneys and representatives. Six of the 12 attorneys and accredited representatives stated that having either a phone number or an e-mail address on the Web site to contact HAB would help facilitate communication. Two attorneys suggested that using e-mail could speed correspondence with HAB as well as the submission of application materials. Four attorneys who had represented clients who were denied parole told us that HAB should include more information on the grounds for denial in the decision letter. Specifically, these four of the seven attorneys who had at least one client denied parole were dissatisfied with the brief form language included with the notification letter. Two attorneys stated that the brief letters gave them the impression that the applications had not received sufficient or serious consideration. HAB officials, however, expressed concern that providing detailed explanations of denials would lead to reapplications tailored to overcome the original grounds for denial, even when the underlying facts of the case had not changed. 
This, in their view, would increase the number of potentially frivolous applications from persons who were ineligible for humanitarian parole, add to the agency’s overall workload, and slow down the processing times for genuinely urgent cases. Finally, two attorneys who received approvals for their clients stated that they would have appreciated clearer instructions about how to obtain the necessary travel documents from an embassy or consulate. HAB officials told us that the letters they provide to applicants or to their representatives can at most tell them whether they have been granted or denied parole and, if granted, which embassy or consulate they need to contact to obtain the travel documents. The officials stated that information regarding embassy and consulate locations and hours of operation is available on the Department of State Web site at www.travel.state.gov. HAB has instituted internal controls that are designed to help ensure that humanitarian parole applications are decided in a fair, equitable, and objective manner, and these controls were generally effective, that is, functioning as intended. With the move to USCIS resulting in the transfer of only the HAB Branch Chief and two permanent adjudicators, HAB does not have sufficient staff for two independent reviews of an application and a possible tie-breaker—a key internal control mechanism. Until an adequate staffing level is decided upon and implemented, HAB may face challenges in adhering to its policies and procedures on adjudication. Without a formal training program for potential new staff and those who might be detailed to HAB, the agency cannot ensure that these staff will be properly trained to make recommendations in accordance with applicable guidelines. Lastly, additional information on USCIS’s Web site about the need for applicants to first exhaust other immigration avenues before applying for humanitarian parole, and more information about the criteria HAB uses to adjudicate humanitarian parole applications, could help applicants decide whether the expenditure of time and the $305 application fee would be appropriate and what types of evidence are needed to help ensure that HAB makes an informed decision. Without this additional information, applicants may lose time and money applying for humanitarian parole, and HAB’s workload may be increased unnecessarily, straining its already limited staff. To help ensure that HAB is able to process applications for humanitarian parole consistent with its own policies and procedures and to help ensure that applicants understand the humanitarian parole rules and processes, we recommend that the Secretary of DHS direct the Director of USCIS to take the following three actions: coordinate with the HAB Branch Chief to determine the number of staff HAB needs to process humanitarian parole applications in accordance with its policies and procedures, and assign them to HAB; develop a formal training curriculum on the adjudication of humanitarian parole cases for new and detailed staff; and revise USCIS’s Web site instructions for humanitarian parole to help ensure that applicants understand the need to first exhaust all other immigration avenues and the criteria HAB uses to adjudicate humanitarian parole applications. We provided a copy of a draft of this report to DHS for comment. In commenting on our draft report, DHS stated that it concurred with our recommendations and that it has begun taking actions to implement each of them. 
DHS stated that HAB is finalizing a comprehensive staffing assessment for review by USCIS and that, in the short term, HAB has made interim arrangements to have experienced USCIS staff assist its staff. DHS stated that USCIS intends to implement a formal humanitarian parole training program during fiscal year 2008 and that the program would offer an orientation process for all staff members responsible for processing humanitarian parole applications. Last, DHS stated that USCIS will undertake a thorough review of the Web site and make appropriate modifications, including but not limited to the development of a frequently-asked-questions section, and that these modifications would be implemented during fiscal year 2008. We are sending copies of this report to the Secretary of Homeland Security, the Secretary of State, the Director of the Office of Management and Budget, and interested congressional committees. We will also make copies available to others on request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-8777 or by e-mail at stanar@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix IV. This report addresses U.S. Citizenship and Immigration Services’ (USCIS) Humanitarian Assistance Branch’s (HAB) policies and procedures for adjudicating applications for humanitarian parole. Specifically, we answered the following questions: (1) What are the characteristics of those who applied for and were either granted or denied humanitarian parole since fiscal year 2002, and did approval and denial rates differ according to these characteristics or the adjudicator assigned? (2) What internal controls has HAB designed to adjudicate humanitarian parole applications, and to what extent did HAB adhere to these internal controls when processing humanitarian parole applications? We performed our work at HAB’s office in Washington, D.C. To determine the characteristics of those who applied for and were either granted or denied humanitarian parole since fiscal year 2002, and what differences, if any, there were in grant or denial rates according to these characteristics or the adjudicator assigned, we obtained and analyzed data from DHS’s Parole Case Tracking System (PCTS), a database that contains computerized records of all individuals whose applications for humanitarian parole have been approved, denied, suspended, terminated, or are pending. We analyzed the data on the 8,748 cases that were either approved or denied from October 1, 2001, through June 30, 2007, the cutoff date necessary to ensure that the cases under review had been fully adjudicated and closed. PCTS was carried over to DHS from the former Immigration and Naturalization Service (INS) when DHS was formed and absorbed INS; it is now maintained by HAB and has no interfaces to any external computer or communication systems. To determine the reliability of PCTS data, we compared the data in PCTS with the information contained in a sample of hard-copy humanitarian parole applications. While HAB keeps approved humanitarian parole applications indefinitely, it keeps denied applications for only 6 months. 
Therefore, to include both approvals and denials in our sample, we selected a stratified probability sample of 145 cases from the 544 cases that were either approved or denied from March 1, 2007, through June 30, 2007, to evaluate data reliability for this period. The results of our data verification were as follows: We sampled 74 denied cases from the population of 378 denied cases and found no errors. We sampled 71 granted cases from the population of 166 granted cases during this period and found no errors. Because we found no instances of error between the data in PCTS and the underlying hard-copy applications, we are 95 percent confident that the frequency of these errors would be less than 4 percent for both the granted and the denied cases for the time period we reviewed. Therefore, we consider the results of our analyses using data from DHS's PCTS to yield accurate representations of the distribution of humanitarian parole grant and denial decisions by applicant characteristics and by adjudicator. We also consider the results of our analyses using PCTS data to yield accurate representations of time frames for adjudicating humanitarian parole applications and of reasons for denial of humanitarian parole applications.

We performed comprehensive analyses on PCTS data covering the period from October 1, 2001, through June 30, 2007. Our analyses included the distribution of humanitarian parole grant and denial decisions by applicant age, gender, and country of residence; the distribution of grant and denial decisions by reason for request and the distribution of grant and denial recommendations by adjudicator; and the time frames required for adjudication (calendar days). Specifically, we summarized data on the number of applications approved or denied humanitarian parole from October 1, 2001, through June 30, 2007. To determine whether there were any differences in the demographic characteristics among those granted or denied humanitarian parole, we analyzed key demographic characteristics of the applicants (i.e., age, gender, and country of origin). We also examined whether there were any differences in the approval and denial rates between specific adjudicators.

To examine the reasons for requesting humanitarian parole and the reasons for which applicants were denied, we selected a stratified probability sample of 462 cases from fiscal year 2002 through June 30, 2007, and performed content analyses on these cases. The sample strata were defined in terms of time period and whether the request was denied or granted. Table 10 summarizes the population of humanitarian parole cases and our sample selected for the content analyses. We performed a content analysis of the reasons for the requests contained in the text boxes on all 462 applications. We then categorized the explanations in the text boxes for requesting humanitarian parole into four major categories: (1) life-threatening medical emergencies; (2) family reunification for compelling humanitarian reasons; (3) emergent circumstances, defined by the HAB guidelines as including the need to visit an ill family member, to resolve matters associated with the death of a relative, or to attend a funeral; and (4) "other," such as a caregiver needed to care for someone in the United States. These categories are in the protocols that HAB adjudicators use in making their recommendations. We confirmed these categories with HAB.
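The "less than 4 percent" upper bounds reported above follow from the standard exact calculation for a sample in which zero errors are observed. The short Python sketch below is our own illustration of that arithmetic, not GAO's estimation software; it uses the binomial approximation and ignores the finite-population correction that drawing without replacement from the 378 denied and 166 granted cases would apply.

    def zero_error_upper_bound(n, confidence=0.95):
        # One-sided upper confidence limit on an error rate when a sample
        # of size n contains zero errors: solve (1 - p)**n = 1 - confidence.
        return 1.0 - (1.0 - confidence) ** (1.0 / n)

    for stratum, n in (("denied", 74), ("granted", 71)):
        print(f"{stratum}: error rate below {zero_error_upper_bound(n):.1%}")
    # denied: error rate below 4.0%
    # granted: error rate below 4.1%

Applying the finite-population correction tightens both bounds to below 4 percent, consistent with the figure reported above; the same calculation underlies the internal control compliance bound reported later in this appendix.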
To determine the reasons for which applicants were denied humanitarian parole, we reviewed the 280 cases in our sample in which the applicant was denied humanitarian parole and performed a content analysis of the explanations for denial of parole contained in the text boxes. We then categorized the explanations for denials contained in these text boxes into 10 categories. HAB officials agreed that these 10 categories represented the reasons for denial. They noted that because their decisions are discretionary, none of these reasons are in and of themselves automatically disqualifying. Rather, these are the reasons that the HAB adjudicators cited in the text boxes in our probability sample as the basis for their denial recommendations. The 10 categories were:

- The applicant had not exhausted alternative immigration processes available to them and for which they might have been eligible, such as obtaining a visa, absent urgent circumstances that made it impractical to do so.
- The applicant provided no evidence supporting an emergent condition, such as a death certificate in the case where the request was to attend a funeral.
- The applicant provided no or inadequate evidence to support the reason for the request for humanitarian parole, such as a claimed medical emergency.
- The applicant had committed a prior immigration violation or other criminal violation.
- The purpose of the parole was not temporary in nature; that is, HAB believed that the applicant intended to stay in the United States beyond the duration of a parole period.
- Other family members already in the United States could provide care to the person intended to benefit from the presence of the applicant.
- The needed medical treatment was available outside the United States.
- There was insufficient evidence of adequate financial support to prevent the applicant from becoming a public charge while in the United States.
- The applicant provided no proof of familial relationship in cases where a family relationship was claimed as the basis of the application.
- Other: applications that did not fall into the other categories. For example, other cases included when a person already approved for humanitarian parole mistakenly applied to HAB for an extension of the parole period rather than applying with a local USCIS district office. Another example was when an applicant for lawful permanent residency left the United States without first obtaining the needed permission from USCIS and then applied for humanitarian parole to re-enter the United States, a situation that is not valid grounds for humanitarian parole.

In recommending that an application be denied, adjudicators sometimes cited more than one reason in explaining their recommendation. Therefore, we counted all reasons cited by the adjudicators in the PCTS text boxes.

Because we followed a probability procedure based on random selections, our sample is only one of a large number of samples that we might have drawn. Since each sample could have provided different estimates, we express our confidence in the precision of our particular sample's results as a 95 percent confidence interval (e.g., plus or minus 8 percentage points). This is the interval that would contain the actual population value for 95 percent of the samples we could have drawn. As a result, we are 95 percent confident that each of the confidence intervals in this report will include the true values in the study population.
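To make the interval arithmetic concrete, the sketch below shows how a two-sided 95 percent confidence interval for an estimated proportion is formed. It is our own illustration in the same spirit as the one above: the simple Wald formula assumes simple random sampling, whereas GAO's actual intervals reflect the stratified, weighted design, and the effective sample size of 150 is our back-calculation from the reported half-width, not a figure from the report.

    import math

    def wald_interval(p_hat, n, z=1.96):
        # Two-sided 95% confidence interval for a proportion under simple
        # random sampling; a stratified, weighted design changes the width.
        half = z * math.sqrt(p_hat * (1.0 - p_hat) / n)
        return p_hat - half, p_hat + half

    # Illustrative only: an estimate of 49% with roughly 150 effective
    # observations yields the approximately +/- 8 point interval
    # described in the example that follows.
    low, high = wald_interval(0.49, 150)
    print(f"95% CI: {low:.0%} to {high:.0%}")   # 95% CI: 41% to 57%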
For example, we estimate that 49 percent of requests were for family reunification for compelling humanitarian reasons, so we are 95 percent confident that, for the entire population of requests, family reunification for compelling humanitarian reasons was the reason for requesting humanitarian parole between 41 and 57 percent of the time. Estimates from this sample are generalizable to the population of humanitarian parole cases processed by DHS (or its precursor, the Immigration and Naturalization Service) from October 1, 2001, through June 30, 2007. The 8,748 applications contained in the PCTS data through June 30, 2007, provided by DHS represent 100 percent of the application cases either granted or denied within the Humanitarian Parole program at the time of our analysis.

To determine what internal controls HAB designed to adjudicate humanitarian parole applications and to what extent HAB adhered to these internal controls when processing humanitarian parole applications, we obtained HAB policies and procedures and compared them with standards for internal control in the federal government and other internal control guidance related to control activities, staffing levels, training, and communication with external clients. In assessing the adequacy of internal controls, we used the criteria in GAO's Standards for Internal Control in the Federal Government, GAO/AIMD-00-21.3.1, dated November 1999. These standards, issued pursuant to the requirements of the Federal Managers' Financial Integrity Act of 1982 (FMFIA), provide the overall framework for establishing and maintaining internal control in the federal government. Also pursuant to FMFIA, the Office of Management and Budget issued Circular A-123, revised December 21, 2004, to provide the specific requirements for assessing and reporting on internal controls. The internal control standards and the definition of internal control in Circular A-123 are based on GAO's Standards for Internal Control in the Federal Government. We also used the guidance contained in Internal Control Management and Evaluation Tool, GAO-01-1008G, dated August 2001. In addition, we tested compliance with two internal controls—that at least two adjudicators reviewed each case and that a signature of the HAB Branch Chief or a designee existed—for each of the 145 cases selected for our validation sample. From this review, we found no instances of noncompliance with the internal controls. This means that we are 95 percent confident the frequency of this type of noncompliance would be less than 4 percent for both the granted and the denied cases for the time period we reviewed. Based on this review, we concluded that these internal controls are effective.

To obtain a more complete understanding of the humanitarian parole process, we interviewed accredited representatives (non-attorneys accredited to represent aliens before immigration courts) of 2 non-profit groups that have handled humanitarian parole cases—Catholic Charities USA and the Hebrew Immigrant Aid Society (HIAS)—as well as 10 private attorneys who are members of the American Immigration Lawyers Association (AILA). The 12 individuals we interviewed collectively had assisted with 20 humanitarian parole applications since 2000. We asked each of these individuals a similar set of questions about their experiences with the application process. Additionally, we asked them to describe aspects of that process that worked well and to identify areas where they felt it could be improved.
Because these individuals and groups were selected using nonprobabilistic methods, conclusions drawn from these interviews cannot be generalized to the immigration law community.

HAB has a protocol document that states, in general, that HAB looks at the totality of the circumstances when reviewing requests for humanitarian parole. The protocol also describes broad reasons for humanitarian parole applications and lists factors within these that HAB may consider in determining parole eligibility. According to its protocols, HAB determines whether the reasons given in the requests are urgent or an emergency compared to other seemingly similar requests. The following information does not constitute a comprehensive list of factors included in the protocol, but rather provides examples of the types of factors HAB considers.

Medical requests: In considering medical requests, HAB adjudicators are to carefully review the application, supporting documentation, and other resources to determine, among other factors, the nature and severity of the medical condition for which treatment is sought; whether or not the requested treatment is available in the home or a neighboring country; and the medical verification of the need of the prospective parolee.

Family reunification: Regarding family reunification, HAB will consider many elements, such as whether the request is designed to circumvent the normal visa issuance process; evidence of a bona fide relationship between the applicant and claimed relatives in the United States; and the age and mental and/or physical limitations of the family member who is seeking to be paroled into the United States.

"Emergent" requests: Emergent conditions that HAB considers include humanitarian situations, such as visiting a dying family member, the need to attend a funeral, or the resolution of matters associated with the death of a family member. In addition, according to HAB protocols, the agency considers evidence of a bona fide relationship; medical documentation supporting the prognosis of the family member, or a death certificate (when a relative has died); and whether there are no other next of kin residing in the United States who can provide emotional support or settle an estate.

Other humanitarian requests: Humanitarian parole is a discretionary decision that inherently permits HAB to consider any circumstances brought to its attention by the applicant. HAB protocols note that while every situation is "emergent" to the applicant and/or sponsor, many requests for humanitarian parole are for the convenience of the applicant and/or sponsor.

In addition to the contact listed above, Michael P. Dino, Assistant Director; David P. Alexander; Richard J. Ascarate; Frances Cook; Michelle Cooper; Shawn Mongin; Mark Ramage; Jerome T. Sandau; John G. Smale, Jr.; Jonathan R. Tumin; and Derek Updegraff made key contributions to this report.
The Immigration and Nationality Act requires that most visitors and immigrants to the United States obtain a visa. Aliens unable to obtain a visa, and with a compelling humanitarian need, may apply to the Department of Homeland Security (DHS) to be granted humanitarian parole, which permits an alien to enter the United States on a temporary basis. Parole responsibility rests with DHS's Humanitarian Assistance Branch (HAB), which was transferred to U.S. Citizenship and Immigration Services (USCIS) in August 2007. In response to congressional requesters, GAO examined (1) the characteristics of those who applied for humanitarian parole since October 1, 2001, and (2) the internal controls HAB designed to adjudicate applications, along with the extent to which HAB adhered to them. To conduct this work, GAO analyzed HAB documents and data, such as its protocols and database of all parole applications since October 1, 2001; interviewed HAB officials about adjudication processes; and interviewed attorneys who had helped individuals file for parole.

The 8,748 humanitarian parole applications that HAB adjudicated from October 1, 2001, through June 30, 2007, displayed various characteristics: 54 percent of the applicants were female and 46 percent male, and 45 percent of the applicants came from 11 countries, with the largest number from Mexico. Sixty-four percent of the requests for humanitarian parole were for family reunification or medical emergency. Persons under age 18 had a 35 percent grant rate, higher than the rate for applicants over 18 and consistent with the stated purposes of humanitarian parole. Seventy-six percent of applications were denied; 24 percent were granted. Among multiple reasons cited for denial by adjudicators in a projectible sample of cases we analyzed, an estimated 57 percent of applicants had not exhausted other avenues of immigration available to them before applying for humanitarian parole, as generally is required. Data analysis revealed few differences in parole denial rates with regard to gender or, with two exceptions, country of residence. While denial recommendation rates for individual adjudicators varied, HAB officials stated that this is expected because the facts and circumstances of cases vary and adjudicators have different backgrounds and experiences that might affect their reviews of an application.

HAB has designed internal controls to help ensure that requests for humanitarian parole are decided in accordance with applicable guidelines, and these controls have been functioning as intended. Specifically, HAB has, among other controls, clear and detailed written policies and procedures, including a requirement that every application be reviewed by two adjudicators and that if they disagree, a third is to make a "tie-breaking" recommendation. A final decision is then made by the HAB Branch Chief or a designee, but if the Branch Chief decides to override the adjudicators' recommendations, the case is first discussed with higher-level officials. A computerized data system also records key information in every case. While HAB's controls are generally effective, three areas can be strengthened. First, following the transfer of HAB to USCIS, HAB may no longer have a sufficient number of permanent staff to ensure it continues to follow its policies and procedures, since two adjudicators are insufficient to provide independent reviews of requests for reconsideration; HAB guidance recommends that such requests be reviewed by two additional adjudicators not previously involved.
Second, HAB does not have a formal training program for new staff who may be detailed to help process applications. Such training is essential to ensure that criteria for granting and denying parole are applied consistently and fairly by the adjudicators. Third, USCIS's Web site has limited information about the circumstances under which a person may apply for humanitarian parole. More information and clearer instructions could reduce the number of applications from those who had not taken the steps generally required before applying for humanitarian parole, such as exhausting other available avenues for entry into the United States.
In the United States, although drugs are classified as prescription or nonprescription at the federal level by FDA, the practice of pharmacy is typically regulated by states. For example, states license pharmacists and enforce pharmacists' continuing education requirements. The 1951 Durham-Humphrey Amendments to the Federal Food, Drug, and Cosmetic Act provided the statutory basis for the two-tier drug classification system that currently exists in the United States. Since that time, there have been a number of proposals to introduce a third category of drugs in the United States. This category has been called by a number of names, including pharmacist-legend, pharmacist-only, third class of drugs, and BTC. Although there is some variation between proposals, the basic idea is the same: a class of drugs would be established that would be available without a prescription, but only in pharmacies. The BTC idea that FDA sought comment on would require that these drugs be sold only in pharmacies, and that a pharmacist's intervention with a consumer occur before the drug could be dispensed.

There are two general views on how a BTC class of drugs would be used in the United States. The first is that BTC would be a permanent class. It would be similar to the current prescription and OTC classes, in that drugs would be placed in the BTC class with no expectation that they would eventually switch to the prescription or OTC class. Drugs in the BTC class would be those determined by FDA to be nonprescription but would require the intervention of a pharmacist. Drugs in the BTC class could come from the current prescription and OTC classes or new drugs could be classified as BTC, although proposals for a BTC drug class generally seek to increase access to medications by switching drugs out of the prescription class.

The second view is that the BTC drug class could function as a transition class for some drugs and a permanent class for others. A drug being switched from prescription to nonprescription would spend time in the transition class, during which the suitability of the drug for OTC status could be assessed. In addition to studies specifically designed for such an assessment, consumer use of the drug as a prescription drug and as a BTC drug could be examined. The argument is that this would provide a better picture of how the drug would be used by the public if it were available as an OTC product. Information that could be gathered while the drug was in the transition class includes types and levels of misuse among the general public, incidents of adverse drug reactions, and interactions with other medications. At some point after the product has been BTC, a decision might be made based on the available data to switch the drug to OTC, return the drug to prescription status, keep the drug in the BTC class for future study, or keep the drug in the BTC class with no expectation that it would eventually be switched to the prescription or OTC class.

FDA has not indicated which drugs might be classified as BTC in the United States. However, among the drugs suggested by some proponents are certain drugs that treat chronic conditions such as high cholesterol, asthma, high blood pressure, diabetes, urinary incontinence, and osteoporosis.
Vaccines; the epinephrine auto-injector used in emergency situations following insect bites, stings, or exposure to other allergens; and oseltamivir—which is used to treat influenza and might be effective in the event of an influenza pandemic—have also been suggested as possible BTC products. More generally, drugs that are subject to abuse and drugs that are to be sold only to consumers of a minimum age have been mentioned as possible candidates for a BTC class. Figure 1 defines the terms we use to describe the drug classes in the United States and other countries and how the levels of restriction vary among classes based on the conditions under which drugs are sold.

As discussed in our previous report, varying levels of restriction on nonprescription drugs already exist in other countries. Among the criteria foreign countries have used for switching a drug from prescription to a less restrictive nonprescription drug class are: (1) the symptoms or circumstances for use of the drug are suitable for self-medication, including self-diagnosis, with the intervention of a pharmacist; and (2) the drug has a low potential for side effects or overdose, and intervention by a pharmacist could minimize these risks. In contrast, nonprescription drugs in the United States generally have these characteristics: (1) their benefits outweigh their risks; (2) consumers can use them for self-diagnosed conditions; (3) they can be adequately labeled for self-medication; and (4) a prescription by a licensed prescriber is not needed for the consumer to safely and effectively use the drug, and the conditions or symptoms are generally self-limiting. Appendix II provides details on the drug classification systems in each study country and the European Union (EU).

While Figure 1 indicates how the levels of restriction for prescription and nonprescription drug classes affect drug availability, other factors can also affect availability, including cost, patient participation in health decisions, and purchase site convenience. For example, the number of pharmacies in a country affects the availability of BTC drugs: the more pharmacies there are, the greater the availability of BTC drugs and the smaller the difference in availability between BTC and OTC drugs. The distribution of pharmacies can also affect availability. Areas without a local pharmacy but with outlets that sell OTC drugs would be more affected by not having drugs available OTC than would areas with nearby pharmacies.

Arguments that have been made supporting and opposing a BTC drug class are generally based on public health or cost considerations and reflect disagreement on the likely consequences of the establishment of such a class. Many of the arguments are concerned with how a BTC drug class might affect consumers' access to medications, pharmacist involvement in selecting drugs, the costs of drugs, and payment policies.

Some of those who support a BTC drug class, including representatives of pharmacist associations and some academics, suggest that such a class would lead to improved public health through increased availability of nonprescription drugs. Proponents of a BTC drug class argue that such a class would increase access because drugs that might not otherwise be suitable for general OTC use could be available without a prescription. The switching of a drug from prescription to OTC represents a large change in the distribution of the drug, from requiring a prescription to requiring no medical intervention at all.
Proponents argue that pharmacists could help bridge this gap if there were a BTC drug class. By providing a new avenue for switches from prescription to nonprescription, a BTC drug class would give consumers access to more drugs that could benefit their health. Pharmacists could counsel consumers on BTC medications and, consequently, some drugs that were unsuitable for OTC availability could be made available as BTC drugs. Proponents argue that this would be particularly important for underserved populations, such as the uninsured, underinsured, or those with limited access to a primary care provider and, thus, to prescription drugs. Moreover, an FDA official told us that many of the drugs that could be switched to OTC under the current two-tier drug classification system have already been reclassified and that a BTC drug class might allow additional drugs to be switched out of prescription status. The convenience of acquiring BTC drugs at a pharmacy could improve consumer adherence to drug regimens by eliminating the need for a visit to a physician to obtain refill prescriptions. Additionally, FDA has noted that people are now taking a larger role in managing their health. Experts have stated that increased access to drugs through a BTC class could give them even more tools to do so, thus potentially improving their health.

Other arguments in favor of a BTC drug class focus on the expanded role of pharmacists under such a class, suggesting that greater use of pharmacist expertise would improve health outcomes. Proponents of a BTC drug class note that pharmacists are successfully engaging in activities beyond their traditional role of dispensing drugs, such as prescribing drugs under certain circumstances or reviewing individuals' drug regimens if they participate as providers of medication therapy management (MTM) in programs where they are authorized to perform such reviews. Proponents also point out that pharmacists are well trained in medication therapy, and a BTC drug class would make better use of pharmacists' knowledge of drug use, drug interactions, and other factors. Additionally, pharmacy schools are becoming more patient focused, integrating training on counseling, physical assessments of patients, and interpretation of lab results into their curricula. Because pharmacists might be more accessible than physicians, better health outcomes could result from the greater consumer interaction with pharmacists brought on by a BTC drug class. During such interactions, pharmacists might also refer individuals with potentially serious medical conditions to a physician; these individuals might not have otherwise entered the health care system. Moreover, proponents of a BTC class note that numerous studies have demonstrated that expanded pharmacist roles in individuals' care can result in health improvements. They note that the pharmacy practice literature generally supports the ability of community pharmacists to reduce adverse reactions and improve clinical outcomes for conditions such as asthma, diabetes, hypertension, and high cholesterol.

Proponents also argue that a BTC drug class would improve public health by permitting additional data to be obtained that would better indicate when a drug would be appropriate for OTC availability. For example, BTC availability might allow consumer–pharmacist interactions to be studied to determine if consumers really need the pharmacist's input.
Additionally, information might be collected from pharmacists about whether consumers could understand product information and appropriately assess their suitability for a medication without pharmacist prompting. This could affect the labeling if the drug were switched to OTC availability. Depending on a drug's safety and usage profile in a BTC class, a drug could either remain permanently in the BTC class or subsequently transition to OTC.

Opponents of a BTC drug class, including some academics and representatives of drug manufacturers, raise concerns that such a class could harm public health by decreasing the availability of nonprescription drugs. Overall, opponents believe that the current two-tier drug classification system works well and provides consumers with an appropriate level of drug availability. Opponents of a BTC drug class argue that such a class could become the default option for drugs being switched from prescription status due to the cautious approach of regulators. Prescription drugs that could have switched to OTC might instead be placed into a BTC drug class, resulting in decreased consumer access compared to OTC availability. Drugs might also remain in a BTC drug class even if suitable for OTC use. Concerns have also been raised that current OTC products could be moved into a BTC class, thereby reducing availability. Additionally, depending on how well information is communicated to consumers about a BTC drug class, both in public campaigns and within pharmacies, consumers could be unaware of available BTC drugs. Underserved and rural communities with few or no pharmacies might also experience barriers to accessing BTC drugs, which would only be available through pharmacies.

Opponents also raise concerns about the potential harm that might be done to consumers if pharmacists are not able to provide high-quality, reliable BTC services. Physician association representatives and others have stated that pharmacists lack adequate clinical training to properly diagnose and treat illnesses, skills which might be required when dispensing BTC drugs. Opponents also raise the concern that pharmacists are very busy and might not have enough time to provide individualized counseling to consumers regarding BTC drugs. Additionally, pharmacists might not have access to relevant information (e.g., a complete medical record, laboratory results, and a complete list of medications taken by the individual) necessary to make an optimal and safe BTC drug recommendation. Opponents also argue that, currently, pharmacists counsel infrequently and sometimes incorrectly. Beyond concerns over inadequate service, opponents suggest that a lack of private confidential areas in pharmacies for consumer–pharmacist interactions could discourage individuals from seeking care for sensitive matters.

Some opponents of a BTC drug class assert that adverse health outcomes could result from improper use of BTC drugs. Individuals who use BTC drugs without consulting a physician might treat symptoms but not the underlying cause of the illness, thus delaying appropriate therapy. Readily available BTC drugs could also encourage individuals with chronic conditions to seek pharmaceutical remedies instead of lifestyle changes that could alleviate the conditions. Additionally, an individual's personal physician might not be aware when a person begins a pharmacist-recommended BTC drug regimen and thus might not be able to monitor the individual appropriately.
Experts told us this uncoordinated care could further fragment the provision of health care.

Proponents of a BTC class have argued that establishment of such a class would likely reduce costs. In the past, the price of a drug has decreased when it was switched from prescription to OTC. Consequently, if a BTC drug class permits increased switching of drugs and pricing follows this pattern, it could reduce costs to consumers and to the overall health care system. Cost savings could also result from a decrease in the number of physicians' visits. The availability of BTC drugs that previously had prescription status could result in fewer physician office visits for patients seeking prescriptions and, accordingly, fewer co-payments and third-party reimbursements to physicians. This would reduce costs for both consumers and insurers, as well as overall health care system expenditures. The pharmacy practice literature also supports the ability of community pharmacists to provide cost-effective interventions and reduce the cost of drug therapy. Additionally, because third-party payers do not typically reimburse consumers for nonprescription drugs and thus might not provide coverage for BTC drugs, drug expenditures for third-party payers could decrease if prescription drugs were switched to a BTC class. Cost reductions for insurers could also be realized in the area of compensation for professional services. Although pharmacist associations maintain that pharmacists would need to be compensated for health care services provided under a BTC paradigm, health services provided by pharmacists are less expensive than those provided by physicians—pharmacists are reimbursed at approximately 80 percent of physician rates for similar time-based services.

Many arguments against a BTC drug class are based on the potential increased costs to individuals, third-party payers, and the overall health care system that such a class might cause. For instance, because insurers do not typically reimburse consumers for OTC drugs and thus might not provide coverage for BTC drugs, out-of-pocket expenses for consumers could increase if prescription drugs were switched to a BTC drug class and if the cost of the BTC product were greater than the previous co-payment. Opponents of a BTC drug class have also argued that costs could increase as the result of the pharmacy services required for establishing such a class. Compensation for pharmacists providing BTC services could result in greater costs for consumers and third-party payers than if the drugs had been made OTC in the current two-tier system. Furthermore, restricted competition could also increase costs. It has been noted that there would be fewer outlets for BTC drugs than for OTC products because BTC products could not be sold at retail outlets other than pharmacies. This reduced availability could adversely affect retail competition and, as a result, drive up prices. Additionally, improper use of BTC drugs and the absence of physician consultations in the BTC process could result in expensive adverse health outcomes. For example, without a physician's diagnosis, a pharmacist might recommend a BTC drug to treat stomach pain. However, potentially serious gastrointestinal problems might underlie this symptom, and delays in obtaining appropriate treatment could have serious and expensive consequences for the consumer and the health care system as a whole.
All five study countries have increased nonprescription drug availability since 1995; however, the impact of restricted nonprescription drug classes on availability is unclear. The five study countries increased drug availability in two ways: by changing nonprescription drug classes or by switching some drugs into less restrictive classes. Italy and the Netherlands established new OTC classes by making some or all nonprescription drugs available for sale at nonpharmacy outlets, while Australia, the United Kingdom, and the United States switched a number of drugs from more restrictive to less restrictive drug classes. When we compared the classification of 86 selected drugs in the five study countries, we found that the impact of restricted nonprescription drug classes on availability is unclear. The United States required a prescription for more of the selected drugs than did the two study countries (Australia and the United Kingdom) with a BTC drug class but also had more of these drugs classified as OTC—the option that provides greatest availability—than the other four study countries.

Consumers in all five study countries have experienced an increase in nonprescription drug availability compared to 1995 due to changes in drug classes or reclassification of drugs into less restrictive classes. Two countries changed their drug classes. The Netherlands added an OTC class in 2007; previously, all nonprescription drugs in the Netherlands were restricted to pharmacy or drugstore sales. As a result, the Netherlands now has three nonprescription drug classes: pharmacy, drugstore, and OTC. Italy also relaxed nonprescription drug sale restrictions in 2006 by making all nonprescription drugs available in nonpharmacy outlets; previously, nonprescription drugs could be sold only in pharmacies. Italy requires that a pharmacist be on the premises in any outlet that sells nonprescription drugs. As a result, Italy's single nonprescription drug class has changed from a pharmacy class to an OTC/pharmacist class. The presence of a pharmacist is not a requirement for the OTC class in any of the other countries we examined. Due to the changes made by Italy and the Netherlands, all five of the study countries now have some form of OTC availability of drugs. The other three study countries made no changes to their drug distribution categories since 1995. Australia has three nonprescription drug classes: BTC, pharmacy, and OTC. The United Kingdom has two nonprescription classes: BTC and OTC. The United States has one nonprescription class: OTC. (Table 1 summarizes the drug classes in use in the study countries in 2008. Appendix II provides more details on the drug classification systems in each of the study countries.)

Australia, the United Kingdom, and the United States have increased drug availability since 1995 by switching certain drugs from more restrictive to less restrictive drug classes. For example, the United States switched 31 drugs—including nonsedating antihistamines, orlistat (a weight-loss aid), and levonorgestrel (an emergency contraceptive switched for consumers aged 18 and above)—from prescription to nonprescription status during this period; there were no switches from nonprescription to prescription status. Australia approved more than six times as many drug switches as the United States—193—to less restrictive classes in the same period.
Australia does not require drugs to switch in a stepwise manner; for example, 28 percent of switches approved from 1995 to 2008—54 out of 193 switches—bypassed an intermediate class in favor of a less restrictive class (e.g., bypassing BTC when switching from prescription to pharmacy status). During this same period, an additional 67 drug switches resulted in more restrictive classification (e.g., from pharmacy to prescription). The United Kingdom also switched drugs from more restrictive to less restrictive classes, approving more than 50 switches from prescription to BTC or BTC to OTC between 1995 and 2008. Among the switches approved were two that were the first of their kind for any country: the 2004 switch of a cholesterol-lowering statin—simvastatin—from prescription to BTC status and the 2008 switch of an antibiotic—azithromycin for treatment of chlamydia—to BTC status. In 2002, the United Kingdom began exploring ways to increase the number of drugs available without a prescription. As part of this process, the United Kingdom has changed its approach to nonprescription switches from a focus on switching drugs for short-term conditions to include drugs for chronic conditions. The United Kingdom uses a stepwise process in which drugs leaving prescription status are given BTC status for several years before they are considered for OTC sale. Thus the BTC drug class in the United Kingdom can serve as a transition class.

It is unclear whether the presence of restricted nonprescription drug classes increases drug availability. The United States required a prescription for more of the drugs we examined than did the two study countries—Australia and the United Kingdom—using a BTC drug class in addition to other nonprescription drug classes. When we compared the classification status of 86 selected drugs in the five study countries, the United States required a prescription for 42 drugs while Australia and the United Kingdom each required a prescription for 23 of the drugs (see table 2). The United States had slightly more of the selected drugs available without a prescription than the two study countries—Italy and the Netherlands—that did not use a BTC drug class. (See app. III for further details on classification of these 86 drugs in the study countries.)

However, the United States had more of the 86 selected drugs classified as OTC—the option that provides greatest availability of these drugs for consumers—than all other study countries. With the exception of levonorgestrel (an emergency contraceptive), all nonprescription drugs (43 drugs, or 98 percent) were OTC in the United States without any restrictions. In contrast, 54 to 100 percent of nonprescription drugs in the other four study countries had conditions placed on their sale that restricted their availability. These restrictions included limiting sale to pharmacies and requiring pharmacist involvement in the sale (Australia and the United Kingdom), limiting sales to pharmacies and drugstores (the Netherlands), or requiring a pharmacist to be on the premises at any retail outlet selling nonprescription drugs (Italy). Therefore, an assessment of the restrictiveness of the drug distribution system in the United States compared to the other countries studied depends on the definition of availability. If availability is defined by the number of drugs available for OTC sale, the United States appears to have the least restrictive system, because more of the 86 drugs are available for OTC sale than in any of the other countries.
However, if availability is defined by the number of drugs for nonprescription sale regardless of any other restriction on their sale, the United States is more restrictive than Australia and the United Kingdom but slightly less restrictive than Italy and the Netherlands.

The classification of drugs in other countries and the existence of other classes provide little insight into the likely effect of a BTC drug class on nonprescription drug availability in the United States. It is unclear whether establishing a BTC drug class in the United States would allow more drugs to be switched out of the prescription class. We did not find a consistent association between the classification of particular drugs in our sample by a given country and the drug classification system in that country. For example, the United States gave less restrictive classification to some drugs and more restrictive classification to other drugs when compared to the other four study countries. Twelve drugs (14 percent) in the sample had OTC status in the United States but a more restrictive status in all of the other study countries, including two drugs with OTC status in the United States but prescription status in all of the other study countries (see table 3). Conversely, we found that seven drugs (8 percent) in the sample had prescription status in the United States and nonprescription status in all other study countries (see table 4). Additionally, we found that seven drugs (8 percent) in the sample had prescription status in the United States, Italy, and the Netherlands but had nonprescription status in Australia and the United Kingdom, the two countries with a BTC drug class (see table 5). Study countries without a BTC drug class, therefore, had reduced availability of a small percentage of drugs when compared with the study countries using a BTC drug class.

Pharmacist-, infrastructure-, and cost-related issues would have to be addressed before a BTC drug class could be established in the United States. Several issues involved with implementing a BTC drug class pertain to the roles and responsibilities of pharmacists, such as defining their BTC dispensing responsibilities and training needs. Infrastructure issues, such as establishing systems for the transfer of patient information and private consultation areas, would also be important if a BTC drug class were established. In addition, cost-related issues, such as the availability of third-party coverage for BTC drugs and counseling, would also be important considerations.

If the United States were to establish a BTC drug class, it would be important to establish pharmacists' responsibilities for dispensing BTC drugs. According to FDA, pharmacists' responsibilities for dispensing BTC drugs could include but are not limited to reviewing or conducting an initial screening for clinical laboratory results, contraindications, or drug interactions; advising consumers on safe drug use; and monitoring for continued safe or effective use. Additionally, a pharmacist could be required to document interventions with consumers when dispensing BTC drugs. Experts told us that dispensing procedures could vary depending on the product and disease. Determining whether a standard set of BTC dispensing requirements would apply to all pharmacies and pharmacists across the country would also be important.
Determining whether BTC drugs could be sold through mail-order and Internet pharmacies—where physical observation of the consumer would not be possible—would be important, as would determining whether pharmacists in these settings would need to fulfill additional dispensing requirements, such as using screening questions designed for remote counseling to ensure appropriate drug use.

Ensuring that pharmacists meet their responsibilities for dispensing BTC drugs, including providing necessary counseling, would be an important issue to resolve. One potential purpose of classifying drugs as BTC is for pharmacists to ensure that consumers meet specified criteria for using these drugs and then to provide education on the proper use of these drugs. Failure to ensure that such counseling occurs would diminish the value of a BTC drug class. In Australia, one of our study countries with a BTC drug class, a 2000 government-sponsored review of the drug classification system found that pharmacist counseling did not occur to the intended extent and called for an enhancement of professional standards for pharmacists. Australian agency officials told us that since the time of this report, pharmacists' provision of counseling for BTC drugs has improved with the development of counseling standards and clarification of the legislative controls which regulate these professional standards. Professional associations have also played a role in monitoring the quality of pharmacist counseling in Australia. A study examining the Quality Care Pharmacy Support Centre's "mystery shopper" visits—used to monitor and provide feedback on Australian pharmacies' performance since 2002—found that repeated mystery shopper visits led to notable improvement in pharmacists' handling of nonprescription drugs.

FDA officials told us that, if a BTC drug class were created in the United States, FDA would need to work with the states to determine the mechanisms through which oversight would be provided. An official from the National Association of Boards of Pharmacy told us that, if a BTC drug class were created, the state boards of pharmacy would need to establish national standards for a number of issues, including the types of data systems, consumer interactions, documentation, and expertise required for a BTC practice. This official noted that it could be challenging for the state boards of pharmacy to provide oversight for a BTC drug class because of resource constraints.

Pharmacy practice experts and others have also raised concerns that BTC verbal counseling requirements would need to be more stringent than the counseling requirements associated with the Omnibus Budget Reconciliation Act of 1990 (OBRA '90). Under OBRA '90, consumers are allowed to waive their right to speak with a pharmacist, and according to pharmacy practice experts and others, many consumers do so. These experts told us that verbal counseling for a BTC drug class should be mandatory.

Another consideration in the establishment of a BTC drug class would be determining if additional training would be needed for pharmacists and pharmacy staff and assessing whether all pharmacists and pharmacy staff would need to undergo this training. In pharmacist education today, more emphasis is being placed on patient care and assessment than was the case in earlier years.
To fulfill degree requirements, pharmacy students must now earn a Doctor of Pharmacy degree, for which they are required to complete a minimum of 4 academic years, with at least 30 percent of the program spent in clinical training in settings such as community pharmacy, hospital pharmacy, ambulatory care, and acute care general medicine to develop advanced professional practice skills. However, one study found significant variation in the courses used to teach patient assessment skills—which have been mentioned as potentially important for pharmacists providing BTC counseling. Further, pharmacists who received their education prior to the current shift toward patient care and assessment might not have the same skills and abilities as recent graduates. Consequently, several experts told us that additional training would be necessary for at least some pharmacists in order for them to appropriately dispense BTC drugs. At VA and IHS, where some pharmacists have expanded dispensing responsibilities and have been authorized to prescribe drugs, credentialing programs are used to assess pharmacists' competencies before they are granted expanded privileges. Pharmacy associations have indicated that they could design and administer training for a BTC drug class.

It would also be necessary to determine whether all pharmacists and pharmacy staff would be required to be trained in BTC-related skills. For example, one retail chain we spoke with suggested that each pharmacy chain should have the discretion to designate certain pharmacies or pharmacists who would be responsible for dispensing BTC drugs. However, if BTC drugs were only dispensed at certain pharmacies or by BTC-accredited pharmacists, confusion about how to access BTC drugs could result.

To implement a BTC drug class, it would be important to evaluate whether a sufficient pharmacist workforce would be available to make such a program viable. Pharmacy practice experts told us that there is currently a pharmacist shortage that will continue for some time. The Health Resources and Services Administration found that pharmacists have experienced increasing demand for their time in part because of an increase in prescription volume and in part because of the increased amount of time needed to address insurance coverage problems for prescriptions. In addition, experts we interviewed raised the possibility that some pharmacists might not want to take on the additional duties associated with dispensing BTC drugs, which could further reduce the number of pharmacists available to participate. In other countries, pharmacists have been unwilling at times to dispense BTC products. For example, a survey of 1,156 community pharmacists regarding their views and early experiences with BTC simvastatin (a cholesterol-lowering drug) in Great Britain revealed that despite feeling well prepared to counsel on BTC simvastatin, pharmacists were reluctant to dispense the drug without cholesterol or blood pressure testing—which was not required by the drug's protocol—and therefore infrequently sold it. Another example of this can be seen in Florida, which in 1985 authorized pharmacists to independently prescribe certain drugs. Experts have stated that despite having this authority, pharmacists in Florida have rarely done so.
Florida pharmacists’ rare use of their prescribing authority is primarily attributable to drugs being available without a prescription that are just as effective as those they are allowed to prescribe. Pharmacists were also concerned that they would increase their liability risk if they prescribed and they considered the recordkeeping requirements associated with prescribing a drug to be excessively time consuming. Having an inadequate number of pharmacists willing to carry out BTC functions could reduce the value of such a class. Communicating pharmacists’ new role to the public could influence demand for products and services and would be an important issue for the implementation and viability of a BTC drug class. Experts have stated that consumers would need to understand protocols for obtaining BTC drugs, such as the necessity of consulting with a pharmacist before obtaining a BTC drug. Consumers would also need to be aware that, after a consultation, the pharmacist could decide that a BTC drug is not appropriate for the consumer or that a physician visit is necessary. If a BTC drug class were established in the United States, consumers might need time to adjust to pharmacists’ new role under such a class. For example, in the United Kingdom, which has had a BTC drug class since 1968, pharmacists have expressed concerns that consumers have a poor understanding of the pharmacist’s role. Researchers have suggested that marketing pharmacists’ professional services could help to create a demand for these services. Officials with the United Kingdom Department of Health consider increasing the public’s awareness of pharmacist services to be a goal. According to these officials, while some consumers are comfortable obtaining health advice through a pharmacy, very few use a pharmacy’s full range of services. In implementing a BTC drug class, it would be important to determine whether restrictions on the size of a BTC drug class are necessary. Experts raised concerns that if a BTC drug class were too large or if whole categories of drugs switched to BTC at once, it could overwhelm pharmacies because of the time burden involved in dispensing these drugs and the need to train pharmacists and pharmacy staff on new procedures associated with BTC drugs. As a consequence, this situation could create unintended gaps in care by disrupting pharmacies’ regular prescription dispensing duties or interfering with their ability to provide BTC drugs. For instance, pharmacists in the United Kingdom found following different dispensing procedures for multiple drugs to be burdensome. Their ability to make appropriate recommendations was hampered by the time involved in following these procedures. However, some pharmacy officials we spoke with told us that restricting the size of a BTC drug class would not be necessary because pharmacists are accustomed to managing a large number of drugs for various individuals. An assessment of infrastructure needs would be important to the establishment of a BTC drug class in the United States. Implementing a BTC drug class could entail infrastructure changes for pharmacies so that pharmacists could have better patient information on which to base dispensing decisions. For example, data infrastructure enhancements would be necessary for some pharmacies to meet the possible record- keeping requirements of a BTC drug class and to facilitate communication between pharmacists and physicians. 
Other countries consider information-sharing systems important for supporting physician–pharmacist communication. For example, in the Netherlands, physicians and pharmacists communicate regularly through electronic prescribing systems, and health officials are developing a system that physicians and pharmacists can use to share patient-specific drug data. Electronic patient health information, such as laboratory results and diagnoses, could also help U.S. pharmacists make better decisions when dispensing BTC drugs. Although commentators note that most state regulations require that a patient drug profile be maintained at the pharmacy and reviewed prior to dispensing a drug, pharmacies currently have limited access to electronic patient health information. For example, a study of Nebraska pharmacists found that 6 percent of surveyed pharmacists had access to electronic patient health information from other providers. Additionally, a 2003 survey of community pharmacies from across the United States found that 54 percent of the respondent pharmacies were using a paper documentation system. Researchers have found several challenges associated with a paper system, all of which could affect implementation of a BTC drug class; these challenges include documentation time, retrieval of patient data, tracking of consumer outcomes, and storage. Improving pharmacists' access to patient information has been shown to improve decision-making. One study found that pharmacists performing drug utilization reviews made better decisions when they had access to more complete patient information on which to base decisions.

The need for private pharmacy consultation areas is another important infrastructure issue that would require consideration if a BTC drug class were implemented. Several groups have identified a need to establish private counseling areas in pharmacies to ensure consumer privacy. Consumers might be reluctant to receive counseling from pharmacists if they have concerns about privacy. A study of Dutch pharmacies indicated that if individuals are aware that a pharmacy has a separate consultation room, they might be more likely to seek a private consultation with a pharmacist. Similarly, researchers have found that enclosed counseling areas in Australian pharmacies increase the likelihood that screening activities and other enhanced pharmacy services occur. One pharmacy practice expert told us that the majority of Australian pharmacies are including private consultation areas when updating their infrastructure. If private consultation rooms were required as part of a BTC drug class, U.S. pharmacies could incur costs to remodel their facilities. Although the National Association of Boards of Pharmacy currently recommends that U.S. pharmacies have a private area for confidential conversations, states have varied in requiring these areas.

Several cost-related issues would be important for the establishment of a BTC drug class. One consideration would be the availability of third-party coverage of BTC drugs. Pharmacy association and consumer group officials we spoke with told us that the effect of a BTC drug class on consumers' out-of-pocket drug expenses would depend on the reimbursement decisions of third-party payers such as health insurers, who often pay all or most of the cost of prescription drugs but generally do not pay for OTC products.
A 1999 review of insurance plan benefits reported that less than one-third of plans covered selected OTC products and that less than one-third of plans continued to cover products switched from prescription to OTC status. A 2003 study found that although 39 of 43 state Medicaid programs reporting in 2003 covered some OTC drugs when ordered by a prescriber, only 12 provided coverage for OTC drugs that had switched from prescription to OTC status. HHS officials told us that legislative changes might be necessary to allow for Medicare Part D coverage of BTC drugs. A similar consideration concerns drugs now covered under the Medicaid program. If third-party payers do not reimburse consumers for drugs that were switched from prescription to BTC, consumers’ out-of-pocket expenditures could increase. The cost of nonreimbursable BTC drugs could also affect the extent to which consumers use BTC drugs. Evidence from other countries suggests that drug costs can be prohibitive to consumers. In the view of pharmacists in Great Britain, the high cost of BTC drugs such as omeprazole, especially relative to prescription or OTC alternatives, might deter consumers from using the drug. Another expensive BTC drug in the United Kingdom is simvastatin, which is intended for use by individuals who do not qualify for National Health Service coverage of statin treatment; according to pharmacists, the high cost of the drug could discourage some consumers from using it. Officials with the Medicines Evaluation Board of the Netherlands told us that consumers often oppose switches of drugs from prescription to pharmacy status because they lose insurance coverage when a drug becomes nonprescription. Therefore, in the Netherlands, prescription drugs that are not already covered by insurance are more likely to be considered viable switch candidates for pharmacy class status. A survey of individuals with indigestion or hypertension found that about half of all Italian respondents—regardless of their ability to pay for drugs—had obtained prescriptions for drugs that were available OTC in order to obtain insurance coverage because they considered the OTC products too expensive. The availability of third-party coverage for BTC counseling could influence pharmacists’ involvement in a BTC drug class and the quality of pharmacists’ services; it could also affect drug prices. Officials from the American Pharmacists Association have raised concerns about whether it would be financially feasible for pharmacies to carry BTC drugs unless pharmacists were able to bill and be fully paid for the clinical services that might be required for a BTC drug class. In addition to influencing pharmacists’ willingness to participate in a BTC drug class, whether or not pharmacists are compensated might also affect their performance. One study of the factors that increase the prevalence of patient care services in community pharmacies found that paying pharmacists increased their detection of drug-related problems. Another study found that providing pharmacists with a financial incentive was associated with significantly higher documentation levels and higher advanced service levels. A counseling fee could lead to higher drug prices if pharmacist compensation were included in the price of BTC drugs. However, consumers might be willing to pay more for BTC drugs if they consider pharmacist services valuable.
A survey of 2,500 adults in the United States found that the majority of respondents were willing to pay an out-of-pocket fee for pharmaceutical care services, even if they were not currently receiving such services. Third-party payers might also be willing to cover pharmacist services. In one diabetes management study, self-insured employers reimbursed pharmacists for consultation services and, based on the clinical improvements and financial savings associated with the program, decided to retain it as a permanent component of their health plan benefit. Compensation for BTC services might be necessary to offset increased liability. Additional liability could be incurred by pharmacists and pharmacies as a result of their participation in BTC counseling. Pharmacy officials raised the possibility that pharmacists participating in the implementation of a BTC drug class could have a greater exposure to liability because they would dispense drugs without a physician’s order. Concerns about liability might deter pharmacists from dispensing BTC drugs. For instance, such concerns were cited by pharmacists in Florida who were hesitant to use their prescribing authority. Costs might also be affected by new incentives necessary to encourage drug manufacturers to invest funds in a two-stage switch process (from prescription to BTC and then from BTC to OTC). Clinical trials, including actual use studies, are often conducted to help determine whether a drug could be switched from prescription to OTC, and an FDA official indicated that these trials may also be needed to determine if a product could be switched from prescription to BTC and from BTC to OTC. Currently, drug manufacturers may receive 3 additional years of exclusive marketing rights for drugs switched from prescription to OTC status if the switch requires additional clinical trials. Some FDA officials and manufacturers we spoke with believe it might be necessary to provide manufacturers with exclusive marketing rights for drugs switching from prescription to BTC status and also for BTC-to-OTC switches. It is unclear what period of exclusivity would make drug manufacturers’ investment in clinical studies for prescription-to-BTC and BTC-to-OTC switches worthwhile. However, FDA noted that granting this additional period of market exclusivity could reduce competition. HHS provided comments on a draft of this report. The comments are reproduced in appendix IV. In its comments, HHS agreed that cost-related issues would have to be addressed before implementing a BTC drug class. HHS recommended that GAO add a discussion regarding the statutory authority to provide reimbursement under Medicare Part D for drugs that would be included in a BTC drug class if it were to be created. Such discussion is beyond the scope of this report, but we noted in the report that the ramifications for Medicare, as well as Medicaid, would need to be considered before establishment of a BTC drug class. HHS also suggested that a footnote in the report could mislead the reader to believe that the Medicare Prescription Drug, Improvement, and Modernization Act of 2003 (MMA) requires pharmacists to review Medicare beneficiaries’ prescription drug regimens as a component of MTM under Medicare Part D.
While pharmacists are required to participate in the development of such programs, we added text to the main body of the report to clarify that MMA does not require that pharmacists furnish the services provided in MTM programs, although it also does not prohibit them from doing so. HHS stated that while MMA required Part D sponsors to implement MTM programs, the Part D program does not establish any payment schedules for either physicians or pharmacists performing MTM. HHS was concerned that a footnote in the report might be read to mean that MMA specified such a payment schedule. We have revised the text to clarify that MMA does not specify a payment schedule but that CMS often uses a rate that is 80 percent of the physician rate to determine such payments under Medicare. In addition, HHS and VA provided technical comments on the report draft, which we have incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of Health and Human Services, the Commissioner of the Food and Drug Administration, congressional committees, and others. The report also will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have questions about this report, please contact me at (202) 512-7114 or crossem@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V. In light of the November 2007 Food and Drug Administration (FDA) public meeting to explore the public health implications of behind-the-counter (BTC) availability of certain drugs in the United States and the fundamental change that BTC availability would represent in the U.S. drug classification system, we are updating information we first presented in our 1995 report. Specifically, we are reporting on (1) the arguments that have been made supporting and opposing the creation of a BTC drug class in the United States; (2) changes in drug availability in our five study countries since 1995 and the impact of restricted nonprescription drug classes on drug availability; and (3) issues that would be important to the establishment of a BTC drug class. To describe the arguments that have been made supporting and opposing a BTC drug class in the United States, we reviewed published literature, reports, and meeting minutes of FDA hearings on prescription-to-over-the-counter (OTC) switches, and the transcript of and docket submissions for the November 2007 FDA meeting on BTC drug availability. We interviewed officials at FDA, pharmacy associations, drug manufacturers, consumer groups, and industry associations in the United States. We also interviewed academics and other officials knowledgeable about pharmaceutical practice. To determine the impact of restricted nonprescription drug classes on drug availability, we asked experts to help us identify countries that had evaluated drug classification since our 1995 report. Based on this information, we selected 5 of the 11 countries covered in our previous report. We also examined drug classification in the European Union (EU) because EU classification decisions affect drug availability in three of our study countries.
We reviewed published literature, reports, and agency documents on drug classification and prescription-to-nonprescription switches. We also interviewed agency officials, industry representatives, and others knowledgeable about pharmaceutical practices and the relevant laws and regulations in our study countries. Although some countries—including the United States—place additional restrictions on certain prescription drugs, a complete analysis of prescription drug classification was beyond the scope of this report. For our analysis, all prescription drugs were placed in the same class. We examined changes since 1995 in the drug classification systems in two study countries (Italy and the Netherlands) that changed the number or type of nonprescription drug classes in use. We also determined the number of drugs switched from one drug class to another (e.g., prescription to BTC) between 1995 and 2008 for the three study countries—Australia, the United Kingdom, and the United States—that maintained the same number and type of nonprescription drug classes during that time. We examined relevant documents and interviewed knowledgeable officials in those countries. We counted the first switch of a particular drug (e.g., ibuprofen) from one drug class to another that occurred after January 1, 1995, but did not count subsequent switches of additional products containing the same drug between the same two drug classes (except in the case of different nicotine dosage forms such as gum and patches) in order to achieve consistency with the World Self-Medication Industry (WSMI) data discussed below. We also did not count switches that changed only the allowable dosage, pack size, or indications for a drug that had previously been switched. In some cases, a switch in one country may have involved a drug that was not approved for use in one of the other countries or was not subject to regulation as a drug in another country. Additionally, we determined the classification of selected drugs in the United States and the other study countries. We selected a sample of drugs using the WSMI databases that describe the classification status—prescription or nonprescription—of more than 200 drugs in 36 countries. We used the February 1, 2007, tables; these were the most recent tables available at the time we were conducting our study. We excluded from the list any drugs listed as “not registered” or with a blank entry for Australia, the Netherlands, the United Kingdom, or the United States; a sample of 110 drugs resulted. We examined the survey format used to collect information on drug classification and response rates from the most recent survey, and determined that the data were sufficiently reliable for our purposes. After drawing the initial sample from the WSMI tables, we added Italy to our scope. We then determined the classification status of the sample drugs in each of the study countries using agency information including information from knowledgeable agency officials and an examination of the Standard for the Uniform Scheduling of Drugs and Poisons No. 22 (Australia); the Prontuario Farmaceutico Nazionale and the Elenco indicativo dei farmaci SOP e OTC in commercio con prezzo in vigore al 31/12/2006 ai sensi del comma 802 dell’art. 1 Legge 27 dicembre 2006, n.
296 (Italy); the Medicines Evaluation Board Database Human Medicines (Netherlands); List A: Consolidated list of substances which are present in prescription only medicines (POM), with exemptions for pharmacy sale or supply (P); List B: Consolidated list of substances which are present in authorised medicines for general sale; and List C: Consolidated list of substances which are present in authorised products which have been reclassified since 1 April 2002 (United Kingdom); the list of Approved Drug Products with Therapeutic Equivalence Evaluations, 28th Edition (i.e., the Orange Book) (United States); and other agency documents. The data in these reference documents are standard data sources published by each country’s regulatory authority and were sufficiently reliable for our purposes. We found that 24 of the 110 drugs in our initial sample were not approved in one or more of the five study countries and eliminated these drugs from our sample, resulting in a final sample of 86 drugs. We then compared the classification of these drugs across the five study countries in order to determine the least restrictive class to which each drug was assigned regardless of pack size, dosage, or combination ingredients (a simplified sketch of this comparison appears at the end of this section). To identify issues that would be important to the establishment of a BTC drug class in the United States, we interviewed officials at FDA, the Centers for Medicare & Medicaid Services (CMS), the Department of Veterans Affairs (VA), the Indian Health Service (IHS), pharmacist associations, drug manufacturers, consumer groups, and industry associations. We interviewed academics and other experts knowledgeable about pharmacists’ prescribing authority, including individuals who have testified to FDA on the possible creation of a BTC drug class in the United States. We also interviewed agency officials, industry representatives, pharmacist association representatives, and others knowledgeable about pharmacy practices in our other study countries. We reviewed reports and the transcript of and docket submissions for the November 2007 FDA meeting on BTC drugs. We also reviewed published, peer-reviewed pharmacy practice literature, focusing on articles published since our 1995 report and relating to the United States or our other study countries. For this literature review, we searched 67 databases, including International Pharmaceutical Abstracts, EMBASE, Pharmaceutical News Index, Gale Group Health & Wellness Database, Pharm-Line, Science Citation Index, and MEDLINE. Key search terms used were pharmacy practice, pharmacist counseling, pharmacist intervention, pharmacist prescribing authority, pharmaceutical care, collaborative practice, medication therapy management, drug classification, and drug reclassification. We also reviewed literature cited in these studies and studies recommended to us by those we interviewed. We conducted our work from March 2008 through February 2009 in accordance with all sections of GAO’s quality assurance framework that are relevant to our objectives. The framework requires that we plan and perform the engagement to obtain sufficient and appropriate evidence to meet our stated objectives and to discuss any limitations in our work. We believe that the information and data obtained, and the analysis conducted, provide a reasonable basis for any findings and conclusions.
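To make the mechanics of the cross-country comparison described above concrete, the short sketch below works through the same steps on invented data: drugs not approved in every study country are dropped from the sample, each remaining drug's least restrictive class is identified, and the number of drugs requiring a prescription is tallied for each country. This is an illustration only; the drug names, class labels, restrictiveness ordering, and classifications shown are hypothetical stand-ins, not actual WSMI or agency data.

    # Classes ordered from most to least restrictive; the ordering shown is an
    # illustrative simplification of the classes discussed in this report.
    RESTRICTIVENESS = ["prescription", "BTC", "pharmacy", "drugstore", "OTC"]
    RANK = {cls: i for i, cls in enumerate(RESTRICTIVENESS)}

    COUNTRIES = ["Australia", "Italy", "Netherlands", "United Kingdom", "United States"]

    # classification[drug][country] gives the least restrictive class to which the
    # drug is assigned in that country, or None if the drug is not approved there.
    # All values are invented for illustration.
    classification = {
        "drug_a": {"Australia": "pharmacy", "Italy": "OTC", "Netherlands": "drugstore",
                   "United Kingdom": "OTC", "United States": "OTC"},
        "drug_b": {"Australia": "BTC", "Italy": "prescription", "Netherlands": None,
                   "United Kingdom": "BTC", "United States": "prescription"},
    }

    # Step 1: keep only drugs approved in all five countries (the report dropped
    # 24 of 110 sampled drugs this way, leaving 86).
    sample = {drug: classes for drug, classes in classification.items()
              if all(classes.get(c) is not None for c in COUNTRIES)}

    # Step 2: for each sampled drug, find the least restrictive class to which it
    # is assigned anywhere among the five countries.
    least_restrictive = {
        drug: max((classes[c] for c in COUNTRIES), key=lambda cls: RANK[cls])
        for drug, classes in sample.items()
    }

    # Step 3: tally, per country, how many sampled drugs require a prescription.
    rx_counts = {c: sum(1 for classes in sample.values() if classes[c] == "prescription")
                 for c in COUNTRIES}
    print(least_restrictive, rx_counts)

The same tally, run with "OTC" in place of "prescription", would support the kind of comparison made in this report of which country classifies the most sampled drugs as OTC.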
In this appendix, we describe the drug classification systems in our five study countries—Australia, Italy, the Netherlands, the United Kingdom, and the United States. We also describe drug classification in the EU because it affects drug availability in three of our study countries. Although the terms used for different drug classes are included for each country, a standardized set of terminology is also used to facilitate comparisons (see fig. 2). The Therapeutic Goods Administration within the Department of Health and Ageing is responsible for the evaluation and approval of new drugs in Australia. The National Drugs and Poisons Schedule Committee makes recommendations on the appropriate classification of drugs and is also responsible for all decisions to switch a drug from one class to another. Generally, the committee requires that a drug be marketed for 2 years before it will consider allowing it to move to a less restrictive classification; applications that do not meet this requirement may be considered when sufficient evidence is presented. While each state or territory has the authority to determine drug classification independently, all states and territories agreed in 2005 to adopt the national scheduling committee’s decisions in full in order to reduce barriers to commerce in Australia. Australia has a complex, multilevel classification system for drugs that includes Schedule 2 (equivalent to pharmacy), Schedule 3 (equivalent to BTC), Schedule 4 (equivalent to prescription), Schedules 5 and 6 (certain essential oils for human therapeutic use, as well as household and agricultural chemicals, available without a prescription at nonpharmacy outlets), and Schedule 8 (controlled substances for which restrictions on availability are necessary to reduce abuse, misuse, or dependence; e.g., opioids and amphetamines). Additionally, if a substance does not appear in a schedule, it is referred to as unscheduled; unscheduled drugs can be supplied to the public from any retail outlet (i.e., these are OTC drugs). Schedule 1 (formerly containing a number of toxic volatile oils) is not currently in use; Schedule 7 contains dangerous agricultural and industrial poisons; and Schedule 9 contains substances whose manufacture, possession, sale, or use is prohibited except under specific circumstances (e.g., heroin and cannabis). The Italian Pharmaceutical Agency (l’Agenzia Italiana del farmaco), an autonomous agency under the oversight of the Ministry of Health, authorizes the marketing of drugs in Italy. The agency is assisted by the Scientific and Technical Committee, which evaluates and issues opinions on marketing applications. Italy currently has two drug classes: prescription and nonprescription. All nonprescription drugs in Italy are available for sale in nonpharmacy outlets such as supermarkets as long as a pharmacist is on the premises. The requirement that a pharmacist be present wherever nonprescription drugs are sold means that nonprescription drugs in Italy are in an OTC/pharmacist class. However, Italian officials are evaluating the possibility of making small packs of some drugs available in nonpharmacy outlets without the presence of a pharmacist. The agency is also responsible for the decision to switch drugs from one classification to another. Switches from prescription to nonprescription status are generally initiated by the manufacturer; switches from nonprescription to prescription status are much less frequent, occurring on rare occasions for safety reasons.
The Medicines Evaluation Board (College ter Beoordeling van Geneesmiddelen) is responsible for drug approval and classification in the Netherlands. The Medicines Evaluation Board is also responsible for approving requests to switch a drug; requests for OTC classification must be initiated by the company holding the marketing authorization. The drug classification system in the Netherlands includes four categories: prescription only (equivalent to prescription), pharmacy only (equivalent to pharmacy), pharmacy and drugstore (equivalent to drugstore), and general sale (equivalent to OTC). The last three categories are all nonprescription and differ primarily in the locations at which the drugs are available for sale. OTC classification in the Netherlands is based, in part, on a determination of public benefit and the risk–benefit profile of the drug. The drugstore class is the default class for all nonprescription drugs that do not meet the criteria for pharmacy or OTC sale. In the Netherlands, the distinction between drugstore and OTC classification is often based on the dosage or pack size; large pack sizes or higher dosages of a drug might be restricted to pharmacies and drugstores even when smaller pack sizes or lower dosages are available for OTC sale. The pharmacy class in the Netherlands is reserved for drugs requiring interaction with pharmacy staff, although not necessarily with a pharmacist; agency officials told us that they do not expect to place many drugs into this class. At the time of our analysis, six drugs were assigned to the pharmacy class: domperidone (to suppress nausea), orlistat (weight loss aid), aliskiren (treatment of hypertension), clotrimazole (antifungal agent), hexamidine (antiseptic), and dextromethorphan (cough suppressant). The Medicines and Healthcare products Regulatory Agency (MHRA) within the Department of Health is responsible for drug approval and classification in the United Kingdom. The United Kingdom continues to use the three-tier drug classification system that was in place in 1995. This system includes prescription only (equivalent to prescription), pharmacy (equivalent to BTC), and general sale list (equivalent to OTC). The presumption under law is that all drugs are restricted to the BTC drug class unless they meet the criteria for prescription or OTC status. MHRA encourages wider availability of drugs as soon as there is adequate evidence of safety in use. Manufacturers or other interested parties can initiate switches, which proceed in a stepwise manner (prescription to BTC, then BTC to OTC). Experience gained at one level is used to inform the decision to switch the drug to the next level. For example, MHRA guidelines indicate that substances suitable for OTC classification will have been in widespread use in BTC products for many years. Switching to more restrictive classes can also occur when warranted; this was done for large pack sizes of paracetamol in 1998 in an attempt to reduce adverse events associated with this drug. In the United States, FDA has authority to approve drugs before they are marketed, to ensure that they are safe and effective, and to determine whether they will be available only by prescription. The United States uses a two-class drug system—prescription and nonprescription—established by the 1951 Durham-Humphrey Amendments to the Federal Food, Drug, and Cosmetic Act.
Prescription drugs can be dispensed only with written or oral orders (i.e., a prescription) from a licensed prescriber—such as a doctor, nurse practitioner, or physician’s assistant—to a pharmacist or other licensed dispenser, while nonprescription drugs do not require a prescription. Although most nonprescription drugs in the United States are publicly available without any restrictions, a few are stored behind the counter because they require refrigeration (e.g., insulin) or so that purchase quantities can be monitored (e.g., pseudoephedrine), and some are restricted to pharmacy sale in order to monitor consumer age (e.g., levonorgestrel). Nonprescription drugs are often referred to in the United States as OTC drugs. Between 1995 and 2007, FDA assigned 99.7 percent (1,233 out of 1,237) of newly approved drugs to the prescription class, with four new drugs—a topical herpes simplex treatment, a sunscreen product, a nicotine lozenge, and a product to block contact with poison ivy—classified as nonprescription. Drugs can be switched from prescription to OTC status in a number of ways, including through rulemaking or submission of a supplemental new drug application to FDA by the sponsor. In making switch decisions, FDA may seek advice from the Nonprescription Drugs Advisory Committee, often in conjunction with an appropriate specialty committee (e.g., the Pediatric Advisory Committee or the Gastrointestinal Drugs Advisory Committee). Although not bound by the advisory committee’s advice, FDA follows the committee’s recommendation most of the time. The Pharmaceuticals Unit of the European Commission Directorate General for Enterprise and Industry is responsible for approving new drugs submitted for marketing throughout the EU, and the European Medicines Agency makes a recommendation on whether the new drug should receive prescription or nonprescription status. The EU leaves to each member state the decision on whether to use subcategories within the prescription and nonprescription classes. Pharmacy experts told us that European countries have a long tradition of restricting drug sales to pharmacies and that about 60 percent of EU countries do not have an OTC drug class. There are four primary methods to receive marketing approval for a drug in the EU. These include national authorization procedures, which allow a drug to be marketed in a specific country based on an individual application and for which the classification decision is made by the appropriate national authority, plus three methods that are handled at the EU level: the centralized approval procedure, which allows applicants to market an approved product throughout the EU with a single application; the mutual recognition procedure, which can be used to request that a drug approval from one EU country be recognized as valid in one or more other EU countries; and the decentralized procedure, which allows a company to apply for simultaneous approval in multiple EU countries for a drug that is not yet approved in any EU country. Centralized approval is required for certain categories of drugs, including all drugs developed through biotechnology; drugs for the treatment of certain diseases, including acquired immunodeficiency syndrome (AIDS), cancer, neurodegenerative diseases, diabetes, and autoimmune diseases; and orphan drugs.
Although most drugs currently on the market in the EU were originally approved through national authorization procedures prior to the development of a centralized approval process, about 95 percent of new drugs brought to market are now approved through the centralized procedures. The first-ever application for the centralized switch of a drug (orlistat) from prescription to nonprescription status, which will make orlistat available without a prescription in all EU countries, was recently approved. Orlistat was granted centralized approval as a prescription drug in 1998. Legend: Rx = prescription; BTC = behind-the-counter; P = pharmacy; D = drugstore; OTC/P = over-the-counter (pharmacist required); OTC = over-the-counter. In addition to the contact above, Thomas Conahan, Assistant Director; Robert Copeland; Cathy Hamann; Karen Howard; Kristen Jones; Marisa Lee; and Julian Klazkin made key contributions to this report.
In the United States, most nonprescription drugs are available over-the-counter (OTC) in pharmacies and other stores. Experts have suggested that drug availability could be increased by establishing an additional class of nonprescription drugs that would be held behind the counter (BTC) but would require the intervention of a pharmacist before being dispensed; a similar class of drugs exists in many other countries. Although the Food and Drug Administration (FDA) has not developed a detailed proposal for a BTC drug class, it held a public meeting in 2007 to explore the public health implications of BTC drug availability. GAO was asked to update its 1995 report, Nonprescription Drugs: Value of a Pharmacist-Controlled Class Has Yet to Be Demonstrated (GAO/PEMD-95-12). Specifically, GAO is reporting on (1) arguments supporting and opposing a U.S. BTC drug class, (2) changes in drug availability in five countries since 1995 and the impact of restricted nonprescription classes on availability, and (3) issues important to the establishment of a BTC drug class. GAO reviewed documents and consulted with pharmaceutical experts. To examine drug availability across countries, GAO studied five countries it had reported on in 1995 (Australia, Italy, the Netherlands, the United Kingdom, and the United States) and determined how 86 drugs available in all five countries were classified in each country. Arguments supporting and opposing a BTC drug class in the United States have been based on public health and health care cost considerations, and reflect general disagreement on the likely consequences of establishing such a class. Proponents of a BTC drug class suggest it would lead to improved public health through increased availability of nonprescription drugs and greater use of pharmacists' expertise. Opponents are concerned that a BTC drug class might become the default for drugs switching from prescription to nonprescription status, thus reducing consumers' access to drugs that would otherwise have become available OTC, and argue that pharmacists might not be able to provide high quality BTC services. Proponents of a BTC drug class point to potentially reduced costs through a decrease in the number of physician visits and a decline in drug prices that might result from switches of drugs from prescription to nonprescription status. However, opponents argue that out-of-pocket costs for many consumers could rise if third-party payers elect not to cover BTC drugs. All five countries GAO studied have increased nonprescription drug availability since 1995 by altering nonprescription classes or reclassifying some drugs into less restrictive classes. Italy and the Netherlands, which previously allowed nonprescription drugs to be sold only at specialized drug outlets, made some or all of these drugs available for OTC sale. Australia, the United Kingdom, and the United States switched certain drugs from more restrictive to less restrictive drug classes, increasing these drugs' availability. However, the impact of restricted nonprescription drug classes on availability is unclear. When GAO examined the classification of 86 selected drugs in the five countries, it found that the United States required a prescription for more of those drugs than did Australia or the United Kingdom--the study countries using a BTC drug class. However, the United States classified more of the 86 drugs as OTC--the option that provides greatest access to these drugs for consumers--than did all four of the other study countries.
Pharmacist-, infrastructure-, and cost-related issues would have to be addressed before a BTC drug class could be established in the United States. For example, ensuring that pharmacists provide BTC counseling and that pharmacies have the infrastructure to protect consumer privacy would be important. Issues related to the cost of BTC drugs would also require consideration. For example, the availability of third-party coverage for BTC drugs would affect consumers' out-of-pocket expenditures, and pharmacists' compensation for providing BTC services would need to be examined. In commenting on a draft of this report, the Department of Health and Human Services (HHS) agreed that cost-related issues would have to be addressed before implementing a BTC drug class and also provided technical comments. The Department of Veterans Affairs (VA) also reviewed the report and provided technical comments. We have incorporated HHS and VA technical comments as appropriate.
Advances in information technology and the explosion in computer interconnectivity have had far-reaching effects, including the transformation from a paper-based to an electronic business environment and the capability for rapid communication through e-mail. Although these developments have led to improvements in speed and productivity, they also require the development of ways to manage information that is increasingly in electronic rather than paper form. For federal agencies, such information includes e-mail messages that may have the status of federal records. Under the Federal Records Act, each federal agency is required to make and preserve records that (1) document the organization, functions, policies, decisions, procedures, and essential transactions of the agency and (2) provide the information necessary to protect the legal and financial rights of the government and of persons directly affected by the agency’s activities. These records, which include e-mail records, must be effectively managed. If they are not, individuals might lose access to benefits to which they are entitled, the government could be exposed to unwarranted legal liabilities, and historical records of vital interest could be lost forever. In addition, agencies with poorly managed records risk increased costs when attempting to search their records in response to Freedom of Information Act requests or litigation-related discovery actions. Accordingly, agencies are required to develop records management programs to ensure that they have appropriate recordkeeping systems with which to manage and preserve their records. Among the activities of a records management program are identifying records and sources of records and providing records management guidance, including agency-specific recordkeeping practices that establish what records need to be created in order to conduct agency business. Agencies are also required to schedule their records: that is, to identify and inventory records, appraise their value, determine whether they are temporary or permanent, and determine how long the temporary records should be kept. The act also gives the National Archives and Records Administration (NARA) responsibilities for oversight and guidance of federal records management, which includes management of e-mail records. NARA works with agencies to schedule records, and it must approve all records schedules. Records schedules may be specific to an agency, or they may be general, covering records common to several or all agencies. According to NARA, records covered by general records schedules make up about a third of all federal records. For the other two thirds, NARA and the agencies must agree upon specific records schedules. No record may be destroyed unless it has been scheduled. For temporary records, the schedule is of critical importance, because it provides the authority to dispose of the record after a specified time period. (For example, General Records Schedule 1, Civilian Personnel Records, provides instructions on retaining case files for merit promotions; agencies may destroy these records 2 years after the personnel action is completed, or after an audit by the Office of Personnel Management, whichever is sooner.) Once a schedule has been approved, the agency must issue it as a management directive, train employees in its use, and apply its provisions to temporary and permanent records. NARA has issued regulations that specifically address the management of e-mail records.
As with other records, agencies are required to establish policies and procedures that provide for appropriate retention and disposition of e-mail records. NARA further specified that for each e-mail record, agencies must preserve certain transmission data—names of sender and addressees and message date. Further, except for a limited category of “transitory” e-mail records, agencies are not permitted to store the recordkeeping copy of e-mail records in the e-mail system, unless that system has certain features, such as the ability to group records into classifications according to their business purposes and to permit easy and timely retrieval of both individual records and groupings of related records. These recordkeeping features are important to ensure that e-mail records remain both accessible and usable during their useful lives. For example, it is essential to be able to classify records according to their business purpose so that they can be retrieved in case of mission need. Further, if records cannot be retrieved easily and quickly, or they are not retained in a usable format, they do not serve the mission or historical purpose that led to their being preserved. If agencies do not keep their e-mail records in systems with the required capabilities, records may also be at increased risk of loss from inadvertent or automatic deletion. If agency e-mail systems do not have the required recordkeeping features, either agencies must copy e-mail records to a separate electronic recordkeeping system, or they must print e-mail messages (including associated transmission information that is needed for purposes of context) and file the copies in traditional paper recordkeeping files. NARA’s regulations allow agencies to use either paper or electronic recordkeeping systems for record copies of e-mail messages, depending on the agencies’ business needs. The advantages of using a paper-based system for record copies of e-mails are that it takes advantage of the recordkeeping system already in place for the agency’s paper files and requires little or no technological investment. The disadvantages are that a paper-based approach depends on manual processes and requires electronic material to be converted to paper, potentially losing some features of the electronic original; such manual processes may be especially burdensome if the volume of e-mail records is large. The advantage of using an electronic recordkeeping system, besides avoiding the need to manage paper, is that it can be designed to capture certain required data (such as transmission data) automatically. Electronic recordkeeping systems also make searches for records on particular topics much more efficient. In addition, electronic systems that are integrated with other applications may have features that make it easier for the user to identify records, and potentially could provide automatic or partially automatic classification functions. However, as with other information technology investments, acquiring an electronic recordkeeping system requires careful planning and analysis of agency requirements and business processes; in addition, electronic recordkeeping raises the issue of maintaining electronic information in an accessible form throughout its useful life.
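As a rough illustration of the requirements described above, the sketch below uses Python's standard email library to parse a message and write a recordkeeping copy that preserves the transmission data NARA requires (sender, addressees, and message date) together with the body, filed under a business-purpose classification. The file names, classification label, and plain-text copy format are assumptions made for illustration; they do not represent an actual agency system or a NARA-prescribed format.

    import email
    from email import policy

    # Parse a stored e-mail message; "message.eml" is a hypothetical input file.
    with open("message.eml", "rb") as f:
        msg = email.message_from_binary_file(f, policy=policy.default)

    # Transmission data that must be preserved with the record under NARA's
    # regulations: sender, addressees, and message date.
    transmission_data = {
        "From": msg["From"],
        "To": msg["To"],
        "Cc": msg["Cc"] or "",  # additional addressees, if any
        "Date": msg["Date"],
    }

    # A hypothetical business-purpose classification, so that related records can
    # be grouped and retrieved together in the recordkeeping system.
    classification = "correspondence/records-management-policy"

    # Write the recordkeeping copy with its context intact.
    body = msg.get_body(preferencelist=("plain",))
    with open("record_copy.txt", "w", encoding="utf-8") as out:
        out.write(f"Classification: {classification}\n")
        for field, value in transmission_data.items():
            out.write(f"{field}: {value}\n")
        out.write("\n")
        out.write(body.get_content() if body is not None else "")

A production system would of course also need to capture attachments and enforce retention schedules; the point here is only that the required transmission data and a retrieval classification travel with the record rather than remaining in the e-mail system.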
Because of its nature, e-mail can present particular challenges to records management. First, the information contained in e-mail records is not uniform: it may concern any subject or function and document various types of transactions. As a result, in many cases, decisions on which e-mail messages are records must be made individually. Second, the transmission data associated with an e-mail record—including information about the senders and receivers of messages, the date and time the message was sent, and any attachments to the messages—may be crucial to understanding the context of the record. Third, a given message may be part of an exchange of messages between two or more people within or outside an agency, or even of a string (sometimes branching) of many messages sent and received on a given topic. In such cases, agency staff need to decide which message or messages should be considered records and who is responsible for storing them in a recordkeeping system. Finally, the large number of federal e-mail users and high volume of e-mails increase the management challenge. According to NARA, the use of e-mail results in more records being created than in the past, as it often replaces phone conversations and face-to-face meetings that might not have been otherwise recorded. These challenges have been recognized by NARA and the records management community in numerous studies and articles. A 2001 survey of federal recordkeeping practices conducted by a contractor for NARA concluded, among other things, that managing e-mail was a major records management problem and that the quality of recordkeeping varied considerably across agencies. In addition, the study concluded that for many federal employees, the concept of a “record” and what should be scheduled and preserved was not clear. A 2005 NARA-sponsored survey of federal agencies’ policy and practices for electronic records management concluded that procedures for managing e-mail were underdeveloped. The study, performed by the University of Maryland Center for Information Policy, stated that most of the surveyed offices had not developed electronic recordkeeping systems, but were instead maintaining recordkeeping copies of e-mail and other electronic documents in paper format. However, all of the offices also maintained electronic records (frequently electronic duplicates of paper records). According to the study team, the agencies did not establish electronic recordkeeping systems due to financial constraints, and implementing such systems was a considerable challenge that increased with the size of the agency. As a result, organizations were maintaining unsynchronized parallel paper and electronic systems, resulting in extra work, confusion regarding which is the recordkeeping copy, and retention of many records beyond their disposition date. Most recently, a NARA study team examined in 2007 the experiences of five federal agencies (including itself) with electronic records management applications, with a particular emphasis on how these organizations used these applications to manage e-mail. The purpose of the study was to gather information on the strategies that organizations are using that may be useful to others. Among the major conclusions from the survey was that implementing an electronic records management application requires considerable effort in planning, testing, and implementation, and that although the functionality of the software product itself is important, other factors are also crucial, including agency culture, training provided, and management and information technology support.
With regard to e-mail in particular, the survey concluded that e-mail messages can constitute the most voluminous type of record that is filed into records management applications. Our work on e-mail records management demonstrates that agencies continue to face challenges similar to those identified by the prior studies. While our results are preliminary and we are not able to project them beyond the agencies we reviewed, I believe they help illustrate the difficulties agencies can face when applying NARA’s requirements to today’s operating environment. Three of the four agencies we reviewed—FTC, DHS, and EPA—had policies in place that generally complied with NARA’s guidance on how to identify and preserve e-mail records, but each was missing one applicable requirement. Specifically, the policies at EPA and FTC did not instruct staff on the management and preservation of e-mail records sent or received from nongovernmental e-mail systems (such as commercial Web-based systems). Both EPA and FTC officials told us that these instructions were not provided because the staff were informed that use of outside e-mail systems for official business was prohibited. However, whenever access to such external systems is available at an agency, providing these instructions is still required. DHS’s policy did not specify that draft documents circulated via e-mail may be federal records. DHS officials recognized that their policies did not specifically address the need to assess the records status of draft documents, and said they planned to address the omission during an ongoing effort to revise the policies. The policy at one of the four agencies, HUD, was missing three of eight applicable requirements. One element of the policy was inconsistent with NARA’s regulation: it required only the sender of an e-mail message to review it for potential records status, but the regulation states that e-mail records could include both messages sent or received. HUD officials acknowledged that its policy omits the recipient’s responsibility for determining the record status of e-mail messages and stated that its e-mail policy fell short of fully implementing NARA regulations in this regard because the department’s practice is not to use e-mail for business matters in which official records would need to be created. However, this practice does not remove the requirement for agency employees to assess e-mail received for its record status, because the agency cannot know that employees will not receive e-mail with record status; the determination of record status depends on the content of the information, not its medium. In addition, two other requirements were missing from HUD’s policy: it did not state, as required, that recordkeeping copies of e-mail should not be stored in e-mail systems and that backup tapes should not be used for recordkeeping purposes. HUD officials stated that they considered that these requirements were met by a reference in their policy to the NARA regulations in which these requirements appear. However, this reference is not sufficient to make clear to staff that e-mail systems and backup tapes are not to be used for recordkeeping. While agency policies were generally compliant with recordkeeping regulations, these policies were not applied consistently. Specifically, for 8 of the 15 senior officials we reviewed, e-mail messages that qualified as records were not being appropriately identified and preserved. Instead, the officials generally kept every message within their e-mail systems.
Each of the four agencies generally followed a print and file process to preserve e-mail records in paper-based recordkeeping systems because their e-mail systems did not have required recordkeeping capabilities. Factors contributing to this lack of compliance with recordkeeping requirements were the lack of adequate staff support and the volume of e-mail received—several of these officials had thousands or even tens of thousands of messages in their e-mail system accounts. Another reason was that keeping every e-mail ensured that no information was lost, which was seen as safe from a legal standpoint. However, by keeping every message, they were potentially increasing the time and effort that would be needed to search through and review all the saved messages in response to an outside inquiry, such as a Freedom of Information Act request. In addition, by not keeping the e-mail in an appropriate recordkeeping system, these officials were making it more difficult for their agencies to find information by subject. Appropriately identifying and saving record material also allows agencies to avoid expending resources on unnecessarily preserving nonrecord material and on keeping record material beyond its usefulness (that is, beyond the date when it can be disposed of according to the records schedule). In contrast, many of the officials whose e-mail records were appropriately managed delegated responsibility for this task to one or more administrative staff members. These individuals were responsible for identifying which e-mail messages qualified as records and ensuring that the message and any attachments were preserved according to the agency’s records management policies. Generally, this required that they print the message, including any attachments and transmission information (who the message was to and from and when it was sent), and place the paper copy in a file. Printing and filing copies of e-mail records is acceptable under NARA’s regulations. However, printing copies of e-mails can lead to an agency maintaining multiple copies of the message in both paper and electronic formats, which can lead to agencies’ expending resources on duplicative storage, as well as confusion over which is the recordkeeping copy. Further, as with all electronic documents, conversion to paper entails the risk of losing some features of the electronic original. Awareness of federal records requirements is also an ongoing concern. At one department, training for senior officials on their records management responsibilities took place only at the beginning of the current administration. Officials who joined the department subsequently were not trained on records management. Similarly, several administrative staff responsible for managing the e-mail of senior officials told us that they had not been trained to recognize a record. A draft bill, the Electronic Communications Preservation Act, would mandate agencies to transition to electronic records management by requiring the Archivist of the United States to promulgate regulations governing agency preservation of electronic communications that are federal records. Among other things, the regulations would
● require the electronic capture, management, and preservation of such electronic communications;
● require that such electronic records are readily accessible for retrieval through electronic searches; and
● require the Archivist to develop mandatory minimum functional requirements for electronic records management applications to meet the first two requirements.
The legislation would also require agencies to comply with the new regulations within 4 years of enactment. Requiring a governmentwide transition to electronic recordkeeping systems could help federal agencies improve e-mail management. For example, storing e-mail records in an electronic repository could make them easier to search and potentially speed agency responses to Freedom of Information Act requests. As our review shows, agencies recognize that devoting significant resources to creating paper records from electronic sources is not a viable long-term strategy and have accordingly begun to plan or implement such a system. The 4-year deadline in the draft bill could help expedite this transition. In addition, the development of minimum functional requirements by NARA should reduce the development risk that could have resulted from multiple agencies concurrently developing similar systems. By providing time both for standards to be developed and for agencies to implement them, these provisions recognize the need for a well-planned process. Like any investment in information technology, the development of electronic recordkeeping systems will have to be carefully managed to avoid unnecessary cost and performance risks. However, once implemented, such systems could potentially provide the efficiencies of automation and avoid the expenditure of resources on duplicative manual processes and storage. In summary, the increasing use of e-mail is resulting in records management challenges for federal agencies. For example, the large number of federal e-mail users and the high volume of e-mails present challenges, particularly in the current paper-based environment. While agency e-mail policies generally contained required elements, about half of the senior officials we reviewed were not following these policies and were instead maintaining their e-mail messages within their e-mail accounts, where records cannot be efficiently searched, are not accessible to others who might need the information in the records, and are at increased risk of loss. Several agencies are considering developing electronic recordkeeping systems, but until such systems are implemented, agencies may have reduced assurance that information that is essential to protecting the rights of individuals and the federal government is being adequately identified and preserved. Mr. Chairman, this concludes my testimony today. I would be happy to answer any questions you or other members of the subcommittee may have. If you have any questions concerning this testimony, please contact Linda Koontz, Director, Information Management Issues, at (202) 512-6240, or koontzl@gao.gov. Other individuals who made key contributions to this testimony were Timothy Case, Barbara Collier, Jennifer Stavros-Turner, and James Sweetman. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Federal agencies are increasingly using electronic mail (e-mail) for essential communication. In doing so, they are potentially creating messages that have the status of federal records, which must be managed and preserved in accordance with the Federal Records Act. To carry out the records management responsibilities established in the act, agencies are to follow implementing regulations that include specific requirements for e-mail records. In view of the importance that e-mail plays in documenting government activities, GAO was asked to testify on issues relating to the preservation of electronic records, including e-mail. As agreed, GAO's statement discusses challenges facing agencies when managing their e-mail records, as well as current policies and practices for managing e-mail messages that qualify as federal records. This testimony is primarily based on preliminary results of ongoing work, in which GAO is examining, among other things, e-mail policies at four agencies of contrasting sizes and structures (the Department of Homeland Security, the Environmental Protection Agency, the Federal Trade Commission, and the Department of Housing and Urban Development), as well as the practices of selected senior officials. E-mail, because of its nature, presents challenges to records management. First, the information contained in e-mail records is not uniform: it may concern any subject or function and document various types of transactions. As a result, in many cases, decisions on which e-mail messages are records must be made individually. Second, the transmission data associated with an e-mail record--including information about the senders and receivers of messages, the date and time the message was sent, and any attachments to the messages--may be crucial to understanding the context of the record. Third, a given message may be part of an exchange of messages between two or more people within or outside an agency, or even of a string (sometimes branching) of many messages sent and received on a given topic. In such cases, agency staff need to decide which message or messages should be considered records and who is responsible for storing them in a recordkeeping system. Finally, the large number of federal e-mail users and high volume of e-mails increase the management challenge. Preliminary results of GAO's ongoing review of e-mail records management at four agencies show that not all are meeting the challenges posed by e-mail records. Although the four agencies' e-mail records management policies addressed, with a few exceptions, the regulatory requirements, these requirements were not always met for the senior officials whose e-mail practices were reviewed. Each of the four agencies generally followed a print and file process to preserve e-mail records in paper-based recordkeeping systems, but for about half of the senior officials, e-mail records were not being appropriately identified and preserved in such systems. Instead, e-mail messages were being retained in e-mail systems that lacked recordkeeping capabilities. (Among other things, a recordkeeping system allows related records to be grouped into classifications according to their business purposes.) Unless they have recordkeeping capabilities, e-mail systems may not permit easy and timely retrieval of groupings of related records or individual records. 
Further, keeping large numbers of record and nonrecord messages in e-mail systems potentially increases the time and effort needed to search for information in response to a business need or an outside inquiry, such as a Freedom of Information Act request. Factors contributing to this practice were the lack of adequate staff support and the volume of e-mail received. In addition, agencies had not ensured that officials and their responsible staff received training in recordkeeping requirements for e-mail. If recordkeeping requirements are not followed, agencies cannot be assured that records, including information essential to protecting the rights of individuals and the federal government, are being adequately identified and preserved.
Violence against women can include a range of behaviors such as hitting, pushing, kicking, sexually assaulting, using a weapon, and threatening violence. Violence sometimes includes verbal or psychological abuse, stalking, or enforced social isolation. Victims are often subjected to repeated physical or psychological abuse. The federal public health agencies that address violence against women include CDC, NIH, HRSA, and the Substance Abuse and Mental Health Services Administration (SAMHSA). They focus on activities such as defining and measuring the magnitude of violence, identifying causes of violence, and evaluating and disseminating promising prevention, intervention, and treatment strategies. CDC’s National Center for Injury Prevention and Control and National Center for Chronic Disease Prevention and Health Promotion have funded efforts to document the prevalence of violence against women, improve maternal health, and prevent intimate partner violence. CDC’s National Center for Health Statistics operates the National Vital Statistics System, which maintains a national database of death certificate information. The National Center for Health Statistics has a contract with each state to support routine production of annual vital statistics data, generally covering from one-fourth to one-third of state vital statistics operating costs. NIH has funded research to study violence against women through several of its institutes—the National Institute on Alcohol Abuse and Alcoholism, National Institute of Child Health and Human Development, National Institute on Drug Abuse, National Institute of Nursing Research, and National Institute of Mental Health—and the National Center for Research Resources. HRSA’s Maternal and Child Health Bureau, as part of its mission to promote and improve the health of mothers and children, funds demonstration grant programs that focus on violence against women during the prenatal period. SAMHSA funds efforts focused on the mental health and substance abuse treatment of women who have been victims of violence. The federal criminal justice agencies that address violence against women are the Office of Justice Programs’ (OJP) Violence Against Women Office (VAWO), National Institute of Justice (NIJ), and Bureau of Justice Statistics (BJS). Using Violence Against Women Act (VAWA) funds, VAWO administers grants to help states, tribes, and local communities improve the way criminal justice systems respond to intimate partner violence, sexual assault, and stalking. VAWO also works with victims’ advocates and law enforcement agencies to develop grant programs that support a range of services for victims, including advocacy, emergency shelters, law enforcement protection, and legal aid. VAWO administers these funds through both formula and discretionary grant programs. NIJ conducts and funds research on a variety of topics, including violence, drug abuse, criminal behavior, and victimization. BJS collects, analyzes, publishes, and disseminates information on crime, criminal offenders, victims of crime, and the operation of justice systems at all levels of government. The FBI administers the Uniform Crime Reporting Program (UCR). Under this program, city, county, and state law enforcement agencies voluntarily provide information on eight crimes occurring in their jurisdictions: criminal homicide, forcible rape, robbery, aggravated assault, burglary, larceny-theft, motor vehicle theft, and arson.
The FBI assembles and publishes the data and distributes them to contributing local agencies, state UCR programs, and others interested in the nation’s crime problems. CDC homicide data indicate that from 1995 through 1999, homicide was the second leading cause of death for women aged 15 to 24, after accidents. CDC data also show that almost 2,600 women of childbearing age (15 through 44) were homicide victims in 1999. BJS reported that intimate partner homicides accounted for about 11 percent of all murders nationwide in that year. Seventy-four percent of these murders (1,218 of 1,642) were of women. About 32 percent of all female homicide victims were murdered by an intimate partner, in comparison to about 4 percent of all male homicide victims. There is no current national estimate of the prevalence of violence against pregnant women. Estimates that are currently available cannot be generalized or projected to all pregnant women. CDC’s Pregnancy Risk Assessment Monitoring System (PRAMS) develops statewide estimates of the prevalence of violence for women whose pregnancies resulted in live births; 1998 estimates for 15 participating states ranged from 2.4 percent to 6.6 percent. Research on whether women are at increased risk for violence during pregnancy is inconclusive. However, CDC reported that study findings suggest that, for most abused women, physical violence does not seem to be initiated or to increase during pregnancy. National data are also not available on the number of pregnant homicide victims, and such data at the state level are limited. The two federal agencies collecting homicide data, the FBI and CDC, do not identify the pregnancy status of homicide victims. CDC is exploring initiatives that could result in better data on homicides of pregnant women. There is no current national estimate measuring the prevalence of violence during pregnancy—that is, the proportion of pregnant women who experience violence. Some state- and community-specific estimates are available, but they cannot be generalized or projected to all pregnant women. CDC developed PRAMS, an ongoing population-based surveillance system that generates state-specific data on a number of maternal behaviors, such as use of alcohol and tobacco, and experiences—including physical abuse—before, during, and immediately following a woman’s pregnancy. CDC awards grants to states to help them collect these data. The number of states that participate in PRAMS has increased since its inception. Five states and the District of Columbia participated in fiscal year 1987, and 32 states and New York City participated in fiscal year 2001. CDC officials reported that lack of funds has prevented additional states from being added; six states were approved for participation in PRAMS but were not funded in 2002. CDC’s goal is to fund all states that want the surveillance system. The estimated 1998 PRAMS prevalence rates of physical abuse by husband or partner during pregnancy, which CDC reported for 15 states, ranged from 2.4 percent to 6.6 percent. (See app. II for PRAMS prevalence estimates for the 15 participating states and a description of PRAMS’s methodology.) States participating in PRAMS use a consistent data collection methodology that allows for comparisons among states, but it does not allow for development of national estimates because states participating in PRAMS were not selected to be representative of the nation.
In addition, PRAMS data cannot be generalized to all pregnant women because they represent only those women whose pregnancies resulted in live births; the data do not include women whose pregnancies ended with fetal deaths or abortions or women who were victims of homicide. PRAMS is based on self-reported data and, because some women are unwilling to disclose violence, the findings may underestimate abuse. Studies have also estimated the prevalence of violence within certain states and communities and among narrowly defined study populations. These estimates lack comparability and cannot be generalized or projected to all pregnant women. Many of the studies do not employ random samples and are disproportionately weighted toward specific demographic or socioeconomic populations. Most of the 11 such studies we reviewed, which were published from 1998 through 2001, found prevalence rates of violence during pregnancy ranging from 5.2 percent to 14.0 percent. In a CDC-sponsored 1996 review of the literature, the majority of studies reported prevalence levels of 3.9 percent to 8.3 percent. The variability in estimates could reflect differences in study populations and methodologies, such as differences in how violence is defined, the time period used to measure violence, and the method used to collect the data. Research on whether being pregnant places women at increased risk for violence is inconclusive. CDC reported that additional research is needed in this area, but that current study findings suggest that for most abused women, physical violence does not seem to be initiated or to increase during pregnancy. Although some women experience violence for the first time during pregnancy, the majority of abused pregnant women experienced violence before pregnancy. In one study we reviewed, only 2 percent of women who reported not being abused before pregnancy reported abuse during pregnancy. The same study also found that, for some women, the period of pregnancy may be less risky, with violence abating during pregnancy; 41 percent of the women who reported abuse in the year before pregnancy did not experience abuse during pregnancy. Studies have found other factors to be associated with violence during pregnancy, including younger age of the woman, lower socioeconomic status, abuse of alcohol and other drugs by victims and perpetrators of violence, and unintended pregnancy. To increase the generalizability of research on the prevalence and risk of violence to women during pregnancy, researchers have reported the need for more population-based studies that would allow for comparisons of pregnant and nonpregnant women. These studies would draw their samples from all pregnant women, not just those receiving health care or giving birth, and from nonpregnant women. Such research could indicate whether pregnant women are at increased risk for violence compared to their nonpregnant counterparts. Researchers have also suggested using methodologies that consistently define and measure the prevalence of violence. A recent report by the Institute of Medicine on family violence recommended that the Secretary of Health and Human Services (HHS) establish new, multidisciplinary education and research centers to, among other things, conduct research on the magnitude of family violence and the lack of comparability in current research. There is also little information available on violence against pregnant women that results in homicide.
The FBI and CDC are the two federal agencies that collect and report information on homicides nationwide; however, neither agency collects data on whether female homicide victims were pregnant or recently pregnant. According to CDC, 17 states, New York City, and Puerto Rico collect data related to pregnancy status on their death certificates, but the data collected are not comparable. Included in these data are victims who may not have been pregnant at the time of death but had been “recently” pregnant; in addition, states’ criteria for recent pregnancy ranged from 42 days to 1 year after birth. (See app. III for a list of the questions on pregnancy status that states include on their death certificates.) The ability to identify pregnant homicide victims from death certificates is limited. While there are questions on some states’ death certificates regarding pregnancy status, officials in the four states we contacted (Illinois, Maryland, New Mexico, and New York) told us that these data are incomplete and may understate the number of pregnant homicide victims. For example, if the pregnancy item on the death certificate is left blank, there is no way to easily determine whether an autopsy, if conducted, included a test or examination for pregnancy. Moreover, researchers have reported that physicians completing death certificates after a pregnant woman’s death failed to report that the woman was pregnant or had a recent pregnancy in at least 50 percent of the cases. To address these limitations, all four states we contacted are making efforts to compare death certificate data with other datasets and records—such as medical examiners’ reports—to identify pregnant or recently pregnant homicide victims. They told us that they are reviewing the data in order to determine if there is something they can do to prevent violent deaths of pregnant women or help women who are victimized. For example, the Maryland medical examiner’s office conducted a study of the deaths of females aged 10 to 50 to determine if these women were pregnant when they died. Several sources of data—death certificates, medical examiners’ reports, and recent live birth and fetal death records—from a 6-year period were linked. Of the 247 women who were identified as pregnant or recently pregnant, 27 percent were identified through examining cause of death information on death certificates. The remaining 73 percent were identified by matching the woman’s death certificate with recent birth and fetal death records and by reviewing data from medical examiners’ records, such as autopsy reports or police records. Similarly, New York officials determined through dataset links (death certificates, fetal death records, recent birth certificates, and hospital discharge records) that, in 1997, 9 of 174 female homicide victims aged 10 to 54 were pregnant or recently pregnant at the time of death, rather than the 1 of 174 that death certificate data alone would have indicated. Officials from New York and Maryland told us these efforts to link datasets are dependent on records being computerized. Some state officials also told us they did not have the resources to conduct these analyses on a continuing basis. There are two federal initiatives under development that propose to collect data on the number of homicides of pregnant women. CDC is proposing a revision of the U.S. standard certificate of death used for the National Vital Statistics System to include five categories related to pregnancy status. (See fig. 1.)
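The dataset-linkage approach used by Maryland and New York can be illustrated with a short sketch. The record layouts, names, and matching rule below are hypothetical simplifications for illustration only; the states' actual studies also drew on medical examiners' reports, autopsy reports, police records, and hospital discharge data.

```python
from datetime import date, timedelta

# Hypothetical, simplified records for illustration.
death_certificates = [
    {"id": "D1", "name": "Doe, Jane", "sex": "F", "died": date(1997, 6, 10),
     "manner": "homicide", "pregnancy_item": None},  # item left blank
]
birth_and_fetal_death_records = [
    {"mother": "Doe, Jane", "event_date": date(1997, 4, 2)},
]

# States' criteria for "recent" pregnancy ranged from 42 days to 1 year.
RECENT = timedelta(days=365)

def recently_pregnant(victim, events):
    """True if a birth or fetal death record for the same woman falls
    within RECENT before her date of death."""
    return any(
        event["mother"] == victim["name"]
        and timedelta(0) <= victim["died"] - event["event_date"] <= RECENT
        for event in events
    )

for victim in death_certificates:
    if victim["sex"] == "F" and victim["manner"] == "homicide":
        flagged = bool(victim["pregnancy_item"]) or recently_pregnant(
            victim, birth_and_fetal_death_records)
        print(victim["id"], "pregnant or recently pregnant:", flagged)
```

As the New York figures above suggest, most identifications come from the linkage step rather than from the death certificate item itself, which is why the states' efforts depend on the underlying records being computerized.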
Each state has the option of adopting the U.S. standard certificate for its death certificate or excluding or adding data elements. If the revision is approved, CDC expects several states to implement it in 2003, with an increasing number using it each year. CDC is also beginning to implement the National Violent Death Reporting System (NVDRS), which, as currently envisioned, would collect data that could determine the number of pregnant homicide victims. CDC plans to collect data from a variety of state and local government databases on deaths resulting from homicide and suicide. Like the Maryland and New York efforts, NVDRS would link several databases, such as death and medical examiners’ records, to identify pregnant homicide victims. According to CDC, implementation of NVDRS depends on future funding; full implementation would take at least 5 years. The estimated federal cost of this system is $10 million in start-up costs and $20 million in annual operating costs; these estimates primarily consist of expenditures for providing technical assistance to the states and funding for state personnel to collect the data. Violence prevention strategies for both pregnant and nonpregnant women include measures to prevent initial incidents of violence, such as educating women about warning signs of abuse, and intervention activities that identify and respond to violence after it has occurred. Typically, the initial component of an intervention is screening, or asking women about their experiences with violence. Many health care organizations and providers recommend routine screening for intimate partner violence. Studies have found, however, that fewer than half of physicians routinely screen for violence during prenatal visits. Reasons for physicians’ reluctance to screen include lack of training on how to screen and how to respond if a woman discloses violence. Violence prevention strategies also include criminal justice measures, which focus on apprehending, sentencing, incarcerating, and rehabilitating batterers. Little information is available on the effectiveness of violence prevention strategies and programs. Researchers have reported the need for evaluations of the effectiveness of screening protocols and batterer intervention programs. Measures to prevent violence against pregnant women are similar to those to prevent violence against all women. Public health violence prevention programs can include primary prevention measures to keep violence from occurring in the first place and interventions that ask women about their experiences with violence and respond if violence has occurred. Criminal justice strategies to prevent violence against women focus on apprehending, sentencing, incarcerating, and rehabilitating batterers. Efforts to prevent initial incidents of violence concentrate on attitudes and behaviors that result in violence against women. These efforts include educating children, male and female, about ways to handle conflict and anger without violence and about social norms concerning violence, such as attitudes about the acceptability of violence toward women. They also include training parents, police officers, and other community officials to be resources for youth seeking assistance about teenage dating violence. Primary prevention efforts also have been targeted to pregnant women. For example, the Domestic Violence During Pregnancy Prevention Program in Saginaw, Michigan, provided 15-minute counseling sessions to pregnant women who reported that they had not experienced violence.
Women were educated about intimate partner violence and given tools and information to help prevent abuse in their lives, including information on behaviors typical of abusive men, warning signs of abuse, and community resources. Interventions to deal with violence that has occurred are designed to identify victims and to prevent additional violence through such actions as providing an assessment of danger, developing a safety plan, and providing information about and referral to community resources. For example, HRSA has funded a demonstration program to develop or enhance systems that identify pregnant women experiencing intimate partner violence and provide appropriate information and links to services. The HRSA program funds four projects; each project is funded at $150,000 a year for 3 years. Screening for the presence of violence is generally the initial component of intervention efforts to prevent additional violence against pregnant women. Many experts view the period of pregnancy as a unique opportunity for intervention. Pregnant women who receive prenatal care may have frequent contact with providers, which allows for the development of relationships that may facilitate disclosure of violence. For example, the American College of Obstetricians and Gynecologists (ACOG) recommends that physicians screen all patients for intimate partner violence and that screening for pregnant women occur at several times over the course of their pregnancies. Some women do not disclose abuse the first time they are asked, or abuse may begin later in pregnancy. Some of the barriers to women’s disclosure of violence are fear of escalating violence, feelings of shame and embarrassment, concern about confidentiality, fear of police involvement, and denial of abuse. In addition, some health care officials told us that the period of pregnancy may be a difficult time for a woman to leave or take action against the abuser because of financial concerns and pressures to provide the child with a father. Studies have found that fewer than half of physicians routinely screen women for violence during pregnancy. For example, a survey of ACOG fellows reported that 39 percent of respondents routinely screened for violence at the first prenatal visit. The study found that screening was more likely to occur when the obstetrician-gynecologist suspected a patient was being abused. Another study that surveyed primary care physicians who provide prenatal care found that only 17 percent of respondents routinely screened at the first prenatal visit and 5 percent at follow-up visits. Across the 15 states with PRAMS data for 1998, from 25 percent to 40 percent of women reported that a physician or other health care provider talked to them about intimate partner violence during any of their prenatal care visits. CDC and providers of prevention services have reported that reasons for physicians’ reluctance to screen women for violence include lack of time and resources, personal discomfort about discussing the topic, concern about offending patients, belief that asking invades family privacy, and frustration with patients who are not ready to leave or who return to their abusers. Lack of training and education on how to screen for intimate partner violence and lack of knowledge about what to do if a woman reports experiencing intimate partner violence have also been cited as barriers to physician screening. 
In its report on family violence, the Institute of Medicine stated that health professionals’ training and education about family violence are inadequate and recommended that the Secretary of HHS establish education and research centers to develop training programs that prepare health professionals to respond to family violence. Criminal justice approaches to preventing violence against women include apprehending and sanctioning the batterer, preventing further contact between the abuser and the victim, and connecting the victim to community services. In addition, batterer intervention programs, which have existed for over 20 years as a criminal justice intervention, are often used as a component of pretrial or diversion programs or as part of sentencing. Batterer programs can include classes or treatment groups, evaluation, individual counseling, or case management; their goals are rehabilitation and behavioral change. To assist communities, policymakers, and individuals in combating violence against women, the National Advisory Council on Violence Against Women and VAWO developed a Web-based resource for instruction and guidance. These guidelines include recommendations for strengthening prevention efforts and improving services and advocacy for victims. For example, the guidelines recommend that communities increase the cultural and linguistic competence of their sexual assault, intimate partner violence, and stalking programs by recruiting and hiring staff, volunteers, and board members who reflect the composition of the community the program serves. The guidelines also recommend that all health and mental health care professional school and continuing education curricula include information on the prevention, detection, and treatment of sexual assault and intimate partner violence. Researchers have reported that little information is available on the effectiveness of strategies to prevent and reduce violence against women. For example, many health care organizations and providers advocate routine screening of pregnant women for intimate partner violence, but questions have been raised about the effectiveness of screening, the most effective way to conduct screening, and the optimal times for conducting screening. In addition, limited information is available on the impact of screening on women and their children. A CDC official told us that CDC has not issued guidelines or recommendations related to routine screening for violence in health care settings, primarily due to the lack of scientific evidence about the effectiveness of screening. CDC recently funded a cooperative agreement to measure the effectiveness of an intimate partner violence intervention that includes evaluation of a screening protocol and computerized screening. The results of the study are expected to provide data on the array of outcomes that need to be considered in implementing intervention programs to decrease intimate partner violence. CDC officials told us that additional studies are necessary to evaluate screening and intervention strategies and that CDC is in the process of identifying additional study topics and designs that could complement this effort. CDC and other researchers on violence against women and providers of prevention services have identified several other areas in which research could be fruitful. 
For example, they have reported the need to develop information on the most effective ways to promote women’s safety; to develop and evaluate the effectiveness of programs that coordinate community resources from the medical, social services, law enforcement, judicial, and legal systems; and to develop and evaluate the effectiveness of prevention strategies that incorporate cultural perspectives in serving ethnic and immigrant populations. An example of an effort to conduct such research is HRSA’s program to improve interventions for pregnant women experiencing violence; however, the projects’ evaluation components are small and, according to HHS, their results may not be generalizable to the nation. Each funded project will evaluate whether its intervention was effective in improving rates of screening, assessment, and referral or links to community services; the projects may also assess the impact of the intervention on women’s behaviors. For example, the Comprehensive Services program in Baltimore is assessing whether the project was effective in linking families to needed services and whether women report improvement in their physical or psychosocial status after the intervention. The Systems for Pregnancy Education and Awareness of Safety in New York is evaluating whether the project increases the number of women who disclose violence and receive services and referrals to community services, such as shelters. The Perinatal Partnership Against Domestic Violence in Seattle is evaluating the effectiveness of screening protocols and interventions that are tailored to the culture and values of women who are Asian and Pacific Islanders. Researchers have also reported that there is little evaluative information on the effectiveness of violence prevention programs for batterers. A VAWO-funded study of the effectiveness of batterer programs concluded that they have modest effects on violence prevention when compared with traditional probationary practices and that there is little evidence to support the effectiveness of one batterer program over another in reducing recidivism. The study concluded, however, that batterer programs are a small but critical element in an overall violence prevention effort that includes education, arrest, prosecution, probation, and victim services. The study authors advocated experimenting with different program approaches and performing outcome evaluations of batterer programs. The magnitude of the problem of violence against pregnant women is unknown. Current collaborative efforts by federal and state governments to gather and analyze more complete and comparable data could improve policymakers’ knowledge of the extent of this violence and guide future research and resource allocation. These efforts can also help in setting priorities for prevention strategies. Continuing evaluation of prevention strategies and programs could help identify successful approaches for reducing violence against women. We provided a draft of this report to Justice and HHS for comment. Justice informed us that it did not have any comments. HHS agreed with our finding that limited information is available regarding violence against pregnant women. HHS also noted reasons why the data are incomplete, such as the difficulty of collecting data from a representative sample of pregnant victims because they are such a small percentage of the U.S. population. Other reasons HHS cited are legal and ethical issues in conducting research on this population, such as maintaining privacy and confidentiality.
HHS commented that several states are conducting mortality reviews to better understand pregnancy-related deaths and their underlying causes. HHS raised several issues that it considers important regarding violence against women, such as the need to evaluate factors correlated with violence against women, and identified additional efforts within the department that focus on intimate partner violence. We recognize that there are many issues and efforts related to violence against women; however, our focus was on violence against pregnant women, and therefore much of our discussion relates to this population. HHS noted that although HRSA’s demonstration program to improve interventions for pregnant women experiencing violence will result in new qualitative information, the evaluation component is small and the findings would likely be limited. We modified our discussion of this program to indicate that it is a small demonstration program and its results may not be generalizable to the nation. In response to HHS’s comments, we added a description of another demonstration program focused on violence against pregnant women that HRSA plans to initiate in June 2002. HHS also provided technical comments, which we incorporated where appropriate. (HHS’s comments are reprinted in app. IV.) As arranged with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its issue date. We will then send copies to the Secretary of Health and Human Services; the Attorney General; the Administrator of the Health Resources and Services Administration; the Directors of the Centers for Disease Control and Prevention, National Institutes of Health, Office of Justice Programs, and Federal Bureau of Investigation; appropriate congressional committees; and others who are interested. We will also make copies available to others on request. If you or your staff have any questions, please contact me at (202) 512-8777 or Janet Heinrich, Director, Health Care—Public Health Issues, at (202) 512-7119. Additional GAO contacts and the names of other staff members who made contributions to this report are listed in appendix V. To do our work, we interviewed and obtained information from officials at the Department of Health and Human Services’ Centers for Disease Control and Prevention (CDC), Health Resources and Services Administration (HRSA), and National Institutes of Health, and the Department of Justice’s Office of Justice Programs (OJP) and Federal Bureau of Investigation (FBI). We also interviewed representatives of and obtained information from the American College of Obstetricians and Gynecologists, Institute of Medicine, Family Violence Prevention Fund, National Coalition Against Domestic Violence, and National Association of Medical Examiners; several state domestic violence coalitions; and researchers. To determine the availability of information on the prevalence and risk of violence against pregnant women, we reviewed literature on the prevalence and risk of violence to women during pregnancy. We identified 11 studies published since 1998 that contained prevalence estimates and assessed their methodologies to ensure the appropriateness of the data collection and analysis methods and the conclusions. We also interviewed CDC officials and reviewed data collected through CDC’s Pregnancy Risk Assessment Monitoring System (PRAMS). 
To determine the availability of data on the number of pregnant women who are victims of homicide in the United States, we interviewed officials and collected and analyzed homicide statistics and reports from CDC, the FBI, and OJP’s Bureau of Justice Statistics. We also interviewed officials from state departments of health and vital statistics in Illinois, Maryland, New Mexico, and New York to determine how they collect and use data on pregnant homicide victims. We selected these states because, in addition to collecting pregnancy data on their state death certificates, they are active in collecting and analyzing information from various sources to study maternal health issues. The states were not intended to be representative of all states. We also interviewed and obtained information from CDC and Justice officials to identify federal initiatives that are under way to improve the availability of information on homicides of pregnant women. To identify strategies and programs to prevent violence against pregnant women, we gathered information through a literature review and interviews with and information collected from researchers and officials from federal agencies, health care associations, and advocacy groups. We reviewed a HRSA-funded program (with projects located in Illinois, Maryland, New York, and Washington) and two other programs (located in Michigan and Pennsylvania) because they focused specifically on violence against pregnant women and served varied populations, including adolescents, diverse ethnic groups, and women with substance abuse problems. We conducted our work from July 2001 through April 2002 in accordance with generally accepted government auditing standards. CDC developed PRAMS, a population-based survey of women whose pregnancies resulted in live births. CDC awards grants to states to help them collect information on women’s experiences and behaviors before, during, and immediately following pregnancy. CDC funded about $6.2 million for PRAMS in fiscal year 2001; grant awards to states ranged from $100,000 to $150,000. CDC’s funding for PRAMS also includes costs for CDC staff and contractors to provide technical support to the states. States participating in PRAMS use a consistent methodology to collect data. Each state selects a stratified sample of new mothers every month from eligible birth certificates and then collects data through mailings and follow-up telephone calls to nonrespondents. A birth certificate is eligible for the PRAMS sample if the mother was a resident of the state. For 1998, the most recent year for which CDC has reported comprehensive data for PRAMS, states used a standardized questionnaire that asked women if their husbands or partners physically abused them during their most recent pregnancy. PRAMS defined physical abuse as pushing, hitting, slapping, kicking, or any other way of physically hurting someone. Table 1 lists 1998 PRAMS estimates of the prevalence of intimate partner violence during pregnancy.
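The PRAMS design just described, a stratified sample drawn from birth certificates with responses weighted back to the state's births, can be sketched as follows. The strata, sample sizes, and abuse rates below are invented for illustration and are not CDC's actual design parameters.

```python
import random

random.seed(0)

# Hypothetical strata with different sampling fractions; PRAMS states
# oversample some groups and weight responses back to all births.
strata = {
    "low birthweight":    {"births": 400,  "sampled": 120, "abuse_rate": 0.08},
    "normal birthweight": {"births": 4600, "sampled": 180, "abuse_rate": 0.04},
}

weighted_abuse_count = 0.0
total_births = 0
for stratum in strata.values():
    design_weight = stratum["births"] / stratum["sampled"]
    # Simulated yes/no questionnaire responses on physical abuse.
    responses = [random.random() < stratum["abuse_rate"]
                 for _ in range(stratum["sampled"])]
    weighted_abuse_count += design_weight * sum(responses)
    total_births += stratum["births"]

print(f"Weighted statewide prevalence: {weighted_abuse_count / total_births:.1%}")
```

Because each state samples only its own birth certificates, the weights yield state-specific estimates that are comparable across PRAMS states but, as noted above, cannot be combined into a national estimate.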
The questions on pregnancy status that states include on their death certificates (app. III) vary in wording and in the time period covered, as the following question texts, one per reporting jurisdiction, show:
Was there a pregnancy in last 90 days or 42 days?
If female, was there a pregnancy in the past 3 months?
If female, indicate if pregnant or birth occurred within 90 days of death.
If female, was there a pregnancy in the past 3 months?
Was decedent pregnant or 90 days postpartum?
If female, was there a pregnancy in the past 12 months?
If deceased was female 10-49, was she pregnant in the last 90 days?
Indicate if the decedent was pregnant or less than 90 days postpartum at time of death.
If female, was decedent pregnant in the past 12 months?
If deceased was female 10-49, was she pregnant in the last 90 days?
If female, was there a pregnancy in the past 3 months?
If female, was she pregnant at death or any time 90 days prior to death?
Was decedent pregnant within last 6 weeks?
If female, was decedent pregnant in last 6 months?
If female under 54, pregnancy in last 12 months?
Was deceased pregnant within 18 months of death?
If female, was deceased pregnant?
Was decedent pregnant at time of death; within last 12 months?
If female, was there a pregnancy in last 3 months?
In addition to those named above, contributors to this report were Janina Austin, Nancy Kawahara, Emily Gamble Gardiner, Geoffrey Hamilton, Anthony Hill, Hiroshi Ishikawa, Alice London, Behn Miller, and Sara-Ann Moessbauer.
The Violence Against Women Act funds shelters for battered women, training for law enforcement officers and prosecutors, and research on violence against women. Available data on the number of pregnant women who are victims of violence are incomplete and lack comparability. There is no current national estimate of the prevalence of violence against pregnant women. Available estimates cannot be generalized to all pregnant women, and little information is available on the number of pregnant homicide victims. Health and criminal justice officials have designed multiple strategies to prevent violence against women, but little is known about their effectiveness. Strategies to prevent violence against pregnant women are similar to those to prevent violence against all women and include public health efforts to prevent violence in the first place, intervention activities that identify and respond to violence after it occurs, and criminal justice strategies that focus on incarcerating or rehabilitating batterers.
DOD’s Military Health System has two missions: supporting wartime and other deployments, and providing peacetime health care. In fiscal year 2011, DOD offered health care services to about 9.7 million eligible beneficiaries in the United States and abroad through TRICARE, the Military Health System’s regionally structured health care program. Under TRICARE, beneficiaries may obtain care either through DOD’s direct care system of military treatment facilities (MTFs) or through DOD’s purchased care system of civilian providers. The total DOD health care budget for fiscal year 2011 was $52.45 billion, of which $17.76 billion was to provide health care through the direct care system of MTFs. Of the $17.76 billion in direct care costs, DOD spent about $1.91 billion contracting for various medical services, including about $1.14 billion for contract health care professionals, the primary focus of this report. Figure 1 below shows the total DOD health care budget, the amount spent on direct health care, and the amount the military departments spent on contracts for health care professionals working in MTFs in the United States in fiscal year 2011. The Assistant Secretary of Defense for Health Affairs (Health Affairs) is the principal advisor for all DOD health policies and programs. This office issues policies, procedures, and standards that govern the management of DOD medical programs and has the authority to issue DOD instructions, publications, and memorandums that implement policy approved by the Secretary of Defense or the Under Secretary of Defense for Personnel and Readiness. However, this office does not have direct command and control of the military departments’ MTFs. The TRICARE Management Activity (TMA), under the authority and direction of Health Affairs, is responsible for awarding, administering, and managing DOD’s contracts for purchased care, including the regional managed care support contracts. See figure 2 for the current organizational structure of DOD’s Military Health System. Under the direct care system, each military department recruits, trains, and funds its own medical personnel to administer medical programs and provide medical services to beneficiaries. The Departments of the Army and the Navy each have a medical command, headed by a surgeon general, that manages the department’s MTFs and other activities through a regional command structure. Within these medical commands, the Army and Navy have separate but similarly centralized approaches to contracting for medical services, including health care professionals. The Army acquires medical services through the Health Care Acquisition Activity, which has a main contracting center and five regional contracting offices. The Naval Medical Logistics Command is in charge of providing contracting support for medical services for both the Navy and the Marine Corps. Although the Air Force Surgeon General serves a similar role as medical advisor to the Air Force Chief of Staff, he exercises no command authority over Air Force MTFs. The Air Force does not have a medical contracting command like the other two services. Instead, the Air Force has a decentralized contracting structure and relies on more than 60 separate local base contracting offices to acquire medical services. An additional medical organizational structure—the Joint Task Force National Capital Region Medical (JTF CapMed)—was established in 2007 to manage MTFs within the National Capital Region and to execute actions required under the Base Realignment and Closure (BRAC) process.
JTF CapMed is responsible for the management of the Walter Reed National Military Medical Center in Bethesda, Maryland, which was created by combining Walter Reed Army Medical Center and the National Naval Medical Center; and Ft. Belvoir Community Hospital, which replaced DeWitt Army Community Hospital at Ft. Belvoir, Virginia. JTF CapMed relies on the Army to award contracts for health care professionals because it does not have its own contracting authority. Figure 3 depicts the size and location of MTFs in the United States. A variety of contracting arrangements are available to DOD to contract for health care professionals. These contracting arrangements are subject to the Federal Acquisition Regulation (FAR)—the primary regulation for use by all federal executive agencies in their acquisition of goods and services. Table 1 lists some of the different contracting arrangements used to contract for health care professionals, including multiple-award contracts. The military departments contract for many different types of health care professionals. For example, they often contract for nurses, family practice doctors, and medical assistants, among others. The typical process for contracting for these types of professionals is as follows: Once it has been determined that a staffing requirement needs to be fulfilled through a contractual agreement, the acquisition strategy is developed. The strategy addresses the type of contracting arrangement that should be used, the payment terms to use, and the competition requirements. A contracting officer—who is a federal employee with the authority to enter into, administer, and/or terminate contracts—awards a contract. A contracting officer’s representative (COR) is assigned to oversee the contract and ensure that the contractor is performing in accordance with the standards and terms that are set forth in the contract. If problems with a contractor’s performance arise, the COR serves as the contract focal point between the contracting officer and contractor. All three military departments used competition and fixed-price contracts for a majority of their medical services contract obligations in fiscal year 2011. Together, the three military departments most often use multiple-award contracts to contract for health care professionals. Military department analyses indicate that multiple-award contracts may result in lower costs compared to some other contract arrangements. In addition, the military departments use other contract arrangements, such as clinical support agreements (CSAs), to fill requirements in a remote location or for a particular health care specialist. The military departments obligated $1.91 billion for medical services in fiscal year 2011. This figure includes $1.14 billion for contract health care professionals as well as other medical services contract obligations within and outside of MTFs worldwide. Of the $1.91 billion, the military departments used competition for approximately 75 percent of obligations when contracting for medical services. Federal regulations generally require the use of full and open competition, which can help to reduce costs. We have previously reported that competition is a critical tool for achieving the best value. Table 2 shows the percentage of each military department’s direct health care medical services obligations that were competed in fiscal year 2011.
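As a quick arithmetic check on how the dollar figures reported above nest within one another, the following snippet relates them; it uses only the rounded amounts stated in the text, so the derived percentages are approximations.

```python
# Fiscal year 2011 amounts from the text, in billions of dollars.
total_health_budget = 52.45
direct_care = 17.76
medical_services_contracts = 1.91
contract_professionals = 1.14
competed_share = 0.75   # approximate share of obligations competed

print(f"Direct care share of health budget: {direct_care / total_health_budget:.0%}")
print(f"Contracted medical services share of direct care: {medical_services_contracts / direct_care:.0%}")
print(f"Contract professionals share of medical services: {contract_professionals / medical_services_contracts:.0%}")
print(f"Competed medical services obligations: ${competed_share * medical_services_contracts:.2f} billion")
```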
We also found that fixed-price contracts were used for more than 90 percent of direct health care medical service obligations in fiscal year 2011, as shown in table 3. Generally, under a fixed-price contract for services, the government pays a certain amount for the services specified. We previously have reported that this type of contract generally results in the least amount of risk to the government. We found that the Army, Navy, Air Force, and JTF CapMed together used multiple-award contracts for 64 percent of the $1.14 billion in obligations for contracts for health care professionals in fiscal year 2011. CSAs accounted for only 3 percent of the obligations spent on health care professionals in the same time period. Figure 4 shows the percentage of fiscal year 2011 obligations for health care professionals by contract arrangement. Because multiple-award contracts are competitive, they avail the military departments of one of the most fundamental and cost-effective tools in contracting. Competition is the cornerstone of a sound acquisition process and a critical tool for achieving the best return on investment for taxpayers. Officials from the military departments told us that they use multiple-award contracts for many reasons. In addition to perceived cost effectiveness, officials stated that multiple-award contracts result in shorter acquisition lead times and reduce the risk of bid protests. Officials also stated they use multiple-award contracts because these contracts are awarded to small businesses, which helps the military departments meet their small business contracting goals. In fact, 100 percent of the current Army, Navy, and Air Force multiple-award contracts are awarded to small businesses. DOD officials also stated that multiple-award contracts have facilitated the streamlining of acquisitions and the standardization of contract requirements, which saves time and contract administration costs. During our review, Navy and Air Force officials completed analyses of their contracting arrangements that indicated that multiple-award contracts may result in lower costs when contracting for health care professionals compared to CSAs. Specifically, Navy officials conducted an analysis comparing the hourly costs associated with the same type of health care professional for each of three different contract arrangements—multiple-award contracts, individual set-aside (ISA) contracts, and CSAs. Based on this analysis, the Navy determined that the hourly rate of providers contracted via CSAs was higher than under multiple-award contracts for the same types of services. Similarly, Air Force officials told us and provided analysis indicating that CSAs cost more than multiple-award contracts. In addition, the Air Force conducted a separate analysis to determine cost savings on their multiple-award contracts. Based on this analysis, an Air Force official stated that they have realized $13.8 million, or 15 percent, in savings on their current multiple-award contracts—which took effect in December 2011—compared to their previous multiple-award contracts. Each military department has the flexibility to employ a variety of other contract arrangements to meet its needs for health care professionals. For example, Navy officials stated they sometimes contract directly with an individual health care professional using an ISA when an individual’s qualifications will be used as the primary selection criterion and cost of the contract is not as important as the provider’s qualifications.
Navy officials stated they also use ISAs because they are more likely to be able to pay a health care specialist a competitive market-based salary if they contract with the provider directly instead of contracting with a staffing company that typically adds 15-20 percent in overhead costs. A DOD Inspector General report stated that ISAs may be appropriate in certain circumstances, such as acquiring a scarce specialty. One drawback of using an ISA is the amount of time it takes to award a contract. According to Navy officials, it can take 9-12 months for a contract to be awarded. If the health care professional on the contract decides to leave, the military departments are left with an unfulfilled requirement while a new contract is developed, solicited, and awarded. A CSA is another contracting arrangement that can be used to acquire health care professionals. Similar to ISAs, CSAs are often used to contract for hard-to-fill positions for health care specialists, especially in remote locations. Although the Navy used CSAs in the past to fill requirements for health care professionals, Navy officials stated that they currently do not use CSAs because this contract arrangement is less cost-effective and provides for less competition. Although CSAs were reported by the military departments to be more expensive than multiple-award contracts, TMA officials stated that they have been used to fill positions when other contracting arrangements have been unsuccessful. Regardless of the contract arrangement, some DOD officials told us that it is challenging to fill the requirements for many of the highly skilled health care professionals they need to work at MTFs. For instance, in 2012, multiple-award contracts were competitively awarded to staffing companies specifically for the National Capital Region. Some of the task orders on these multiple-award contracts had to be terminated when the staffing companies could not recruit incumbent health care professionals, whose previous salaries had been well above prevailing market rates, and then had insufficient time to recruit new employees and complete the hiring process without causing gaps in service. Air Force officials also experienced similar challenges in their implementation of multiple-award contracts. For example, some of their multiple-award task orders were terminated because the staffing companies were unable to fill the requirements based on the contracted prices they had proposed. Contracting for health care staffing requirements across the military departments remains largely fragmented. In the absence of an agency-wide strategy, the military departments have attempted to consolidate some staffing requirements, but these efforts have been limited. Over the last 9 years, various DOD groups as well as GAO have recommended that DOD take steps toward such a strategy, but DOD still does not have an agency-wide acquisition strategy to consolidate these requirements. Studies and reports by GAO and others have identified challenges with the fragmented approach that the military departments take to contract for medical services. For example, in 2004, a DOD Inspector General report found that the Military Health System could better coordinate contracting efforts and reduce duplication and fragmentation among DOD contracting organizations that acquire medical services. The report called for a joint and strategic enterprise approach to medical services acquisition.
In 2005, a DOD-wide council convened by the Assistant Secretary of Defense for Health Affairs recommended that DOD identify an alternative to the existing approach for acquiring direct care medical services, and suggested the need for a joint process and joint contracting centers responsible for the coordination, development, and contract execution of medical services acquisitions. This council also recommended that DOD establish strategic sourcing councils to develop strategies for sourcing key labor categories, including nurses and radiologists, and collect standardized aggregate procurement data across military departments. Strategic sourcing involves a shift away from numerous individual procurements to a broader aggregate approach, and often results in cost savings. Our prior work found that success in this regard requires the commitment of senior management, as well as reliable and detailed agency-wide spending data to identify opportunities to leverage buying power, reduce costs, and better manage suppliers. In 2007, DOD drafted a charter for a Defense Medical Strategic Sourcing Council. The council’s charter stated that its goals were to allow DOD to standardize the professional services acquisition process, further decrease variation in unit cost for services, and reduce acquisition workload. However, according to a TMA official, the military departments never signed the charter, and the council was never convened. GAO reported in July 2010 that DOD would benefit from enhanced collaboration among the military departments in their processes for determining professional medical services requirements and recommended that DOD identify, develop, and implement joint medical personnel standards for shared services. While DOD concurred with our recommendation, as of March 2013, no action had been taken to address it. In our March 2011 report on opportunities to reduce duplication, overlap, and fragmentation in government programs, we noted that consolidating common administrative, management, and clinical functions within the Military Health System could increase efficiencies and significantly reduce costs, but that DOD had taken only limited actions in this area. In June 2011, the Deputy Secretary of Defense established a Task Force to review various options for changes to the overall governance structure of the Military Health System and of its multi-service medical markets. The Task Force identified 13 potential governance options for the Military Health System. DOD selected an option for Military Health System governance that would create a defense health agency in part to assume the responsibility for creating and managing shared services, and leave the military chain of command intact with the military departments in control of their military treatment facilities. This option would include a shared services concept to consolidate common services, including acquisition, under the control of a single entity. The Deputy Secretary of Defense stated in a March 2012 memo that DOD recognizes that there are opportunities to achieve savings in the Military Health System through the consolidation and standardization of many shared services, including but not limited to pharmacy programs, medical education and training, health information technology, budget and resource management, and acquisitions.
While DOD is moving forward incrementally with its plans to transform the Military Health System structure and set up the defense health agency, decisions about the consolidation of health care staffing requirements remain outstanding. For example, DOD established a medical services contracting subworking group in 2012. According to DOD officials, the group is in the process of examining issues related to medical services acquisition and anticipates briefing its recommended courses of action within DOD in the July 2013 time frame. Its potential recommendations include three different approaches to realigning and potentially consolidating the responsibility for medical services acquisitions within DOD. Additionally, in the National Defense Authorization Act for Fiscal Year 2013, Congress included a requirement for DOD to provide an implementation plan for its governance reforms, including goals, timeframes, and estimated savings, among other things. In the absence of an agency-wide approach for medical services acquisition, there have been only limited instances of the consolidation of health care staffing requirements. Some of these instances have involved efforts across the military departments. For example, the joint contracts that were reported to us accounted for approximately 8 percent of the $1.14 billion in obligations for health care professionals in fiscal year 2011. Other efforts have involved actions within the departments. For example, the departments have made efforts to use multiple-award contracts to consolidate intraservice staffing requirements, but we identified several instances where multiple task orders were placed for the same type of provider in the same area or facility. The military departments have consolidated a limited number of staffing requirements by developing contracts used at joint facilities such as those in San Antonio and the National Capital Region. In 2009, the Army established two contracts for nurses to work at the San Antonio Military Medical Center, which operates as a joint facility for the Army and the Air Force. Army officials explained that, prior to this joint effort, there were more than 12 contracts with nursing requirements between Brooke Army Medical Center and the Air Force’s Wilford Hall Medical Center, both in San Antonio. The multiple contracts created competition between the two military departments’ facilities for nursing staff. Because of the BRAC-related realignment of medical services in San Antonio, the Army was able to consolidate the nursing requirements and use one multiple-award contract for registered nurses and one for licensed vocational nurses to provide nursing services at both facilities. According to Army officials, these contracts are more successful than the previous contracts in placing the necessary number of nurses in the MTFs in San Antonio, and less administrative oversight is needed. In the National Capital Region, the Army had multiple-award contracts for health care professionals that were awarded prior to the transfer of control to JTF CapMed. The Army then began to award contracts for JTF CapMed facilities in 2012. These contracts are considered to be joint because these facilities are used by more than one military department as a result of the BRAC process.
According to an Army analysis, these contracts resulted in a 14 percent savings over the previous set of contracts for health care professionals at the Walter Reed National Military Medical Center in Bethesda. The Navy also used multiple-award contracts for health care professionals for MTFs in the National Capital Region that were awarded prior to the transfer of control to JTF CapMed at the end of fiscal year 2011. Other than the contracts that were in place for these MTFs before August 2011, the Navy stated that it did not have any additional contracts that were used by other military departments. The Air Force stated that it has not awarded any contracts with jointly developed requirements, but Air Force multiple-award contracts are open to use by other military departments to support joint MTFs.

While examples of joint contracting efforts in place during fiscal year 2011 were limited, additional contracts available to more than one military department have been awarded since then, or are planned. For example, the Army planned seven joint medical service contracting initiatives, including a new contract for medical services in Europe that would be available to all three military departments. None of these had been awarded as of February 2013.

The military departments have made efforts to consolidate some staffing requirements within their own MTFs using multiple-award contracts. Currently, multiple-award contracts in the Army and Navy are generally set up by U.S. geographical region and by provider type to meet the requirements of more than one facility. For example, in each geographical region, the Army and the Navy each have multiple-award contracts for nurses, and one in each region for doctors. In 2012, the Navy had 6 multiple-award contracts on the west coast and 5 on the east coast, covering many types of health care professionals. The Navy routinely receives 25 to 40 proposals, and usually makes 3 to 6 awards to health care staffing companies. Navy officials told us that before multiple-award contract use was prevalent, buying activity was more fragmented: more individual contracts for specific labor categories and locations made the administration and oversight burden much greater. Officials explained that, at one point, the organization responsible for medical services acquisitions was funded on the basis of how many contracts were awarded, which incentivized inefficiency.

In contrast, the Air Force uses multiple-award contracts that are set up nationally to be used by all of its MTFs, and these contracts also include many types of health care professionals. The Army awarded national contracts for health care professionals in fiscal year 2003, but officials said this approach was unsuccessful because not enough companies were able to compete for those contracts and provide health care staffing services on a national scale. Market research and feedback from contractors indicated that regional multiple-award contracts would allow greater opportunities for more small businesses to compete. The Army subsequently put in place multiple-award contracts with regional requirements for particular categories of health care professionals, such as nurses, and found that this approach led to more successful outcomes. Despite the use of multiple-award contracts, the potential for more consolidation among task orders remains.
We identified several instances where many task orders were placed for the same type of provider in the same area or facility, such as 24 task orders in fiscal year 2011 for medical assistants, 16 separate task orders for licensed practical nurses, 8 for clinical psychologists, and 6 for family practitioners, all at the same MTF.

Nearly all of the military departments' contract health care professionals (96 percent) worked in facilities located on military installations in fiscal year 2011. The costs associated with these contracted health care services provided at on-base facilities are not comparable to those at off-base facilities for a variety of reasons. For example, significant issues have been identified within the Military Health System cost accounting system that affect the calculation of unit costs. Further, based on available data and interviews with DOD officials, we determined that labor categories, labor costs, and full-time equivalent calculations all vary by military department, and in some cases by facility or contract. In addition, according to Navy officials, market-based salaries for the same type of provider can vary by geographic location.

DOD reported information on 114 primary on-base MTFs in the United States with contracted health care professionals. In addition, the military departments identified 8 off-base facilities with contracted health care professionals. Collectively, the Army, Navy, Air Force, and JTF CapMed had 11,253 full-time equivalent (FTE) contract health care professionals within the United States in fiscal year 2011, 96 percent of whom provided care at on-base facilities. Figure 5 shows the number of contract health care professionals, civilian health care professionals, and active duty military health care professionals in fiscal year 2011, by military department and JTF CapMed. Table 4 provides information on the number of contracted FTEs at both on-base and off-base facilities. For a complete list of the MTFs and contract health care professional FTEs reported to us by the military departments, see appendix III.

The Army reported that all contract health care professionals worked in 28 primary on-base MTFs; the Army did not report any off-base facilities with contract health care professionals. The Navy reported 21 on-base primary MTFs; 4 percent of its contract health care professionals worked in one of four off-base clinics. The Air Force reported 63 on-base primary MTFs; less than 1 percent of its contract health care professionals worked in one reported off-base clinic associated with the MTF at MacDill Air Force Base, as well as at one primary MTF, Buckley, which is located off-base. JTF CapMed reported two on-base primary MTFs, the Walter Reed National Military Medical Center and the Fort Belvoir Community Hospital, in 2011. Twenty-three percent of contract health care professionals in the National Capital Region worked in one of two off-base clinics, the Fairfax Health Center and the Dumfries Health Center.

Based on available data and interviews with DOD officials, we determined that the costs associated with the provision of care by contract health care professionals at on-base facilities and off-base facilities were not directly comparable for a variety of reasons. First, DOD does not collect and maintain standardized data on health care professionals that would allow for comparisons of the cost of facilities across the military departments, or even within a military department from one facility to another. For example, labor categories are not standardized across DOD.
Labor costs, including salary, benefits, overtime, and other costs, vary by military department and by contract, and the definition of an FTE employee varies by military department. Second, DOD's Task Force report on the Future of Military Health Care concluded that there were significant issues with Military Health System cost accounting that affect the correct calculation of unit costs; for example, reported workload data have been characterized as unreliable. DOD and military department officials we spoke with confirmed this assessment during our review. Third, the financial and data systems used by MTFs are not set up to differentiate between the cost of care provided by contract health care professionals and the cost of care provided by civilian and active duty health care professionals. Finally, market-based salaries for health care professionals vary by geographic location and by specialty. For example, the salary for a chiropractor in Washington, D.C., is significantly higher than the salary for a chiropractor in the Portsmouth, Virginia, area. Therefore, comparing the costs associated with contract health care professionals at off-base facilities to any contract health care professionals at on-base facilities that were not working in the same geographic area would not result in an appropriate comparison.

DOD medical services contracting personnel are subject to DOD-wide training requirements, and health care experience varies for these personnel. The military departments provide CORs, but usually not contracting officers, with specialized training in contracting for health care professionals in addition to DOD's requirements.

The training provided to contracting officers is generally not targeted to any specific area of acquisition, including health care. Contracting officers are federal employees with the authority to bind the government by signing a contract. Contracting officers across DOD are subject to the Defense Acquisition Workforce Improvement Act (DAWIA) requirements, which specify mandatory acquisition training and experience standards for DOD's acquisition workforce. DOD requires all contracting officers to meet the same DAWIA requirements, regardless of any specialization. The training offered by the Defense Acquisition University (DAU) provides a foundation for acquisition and career field knowledge, and is not targeted to specific jobs, including the award and administration of contracts for health care professionals. In this regard, contracting officers responsible for awarding and administering contracts for health care professionals are no different than DOD contracting officers working in other areas.

In addition to DOD-wide requirements, contracting officers responsible for contracts for health care professionals generally have access to health care-specific acquisition expertise within their organizations, according to officials from the Army and Navy. For example:

- Contracting officers at Army's Health Care Acquisition Activity work with the Medical Services Portfolio Manager, who serves as a resource for both health care and acquisition expertise and assists in the development of performance work statements and source selection.

- Navy contracting officers gain knowledge on the job through collocation at the Navy Medical Logistics Command with experienced contracting officers and program analysts with health care specialties.
The Army and Navy medical commands both have contracting officers with primary responsibilities related to the award and administration of contracts for health care professionals. The Army contracting officers we spoke with at the San Antonio Military Medical Center each reported having worked in medical contracting for over 8 years, awarding hundreds of contracts for health care professionals. Similarly, a Navy official provided documents showing that most Navy contracting officers responsible for contracts for health care professionals have at least 3 years of experience in medical contracting.

The Air Force relies on base contracting offices to support contracting for health care professionals at its 63 MTFs and, in contrast to both the Army and Navy, most Air Force contracting officers responsible for awarding and administering contracts for health care professionals are also responsible for the acquisition of non-medical products and services. As a result, according to Air Force officials we spoke with, Air Force acquisition professionals involved in medical services require additional training in personal services contracting, an approach often used in this area that is characterized by the employer-employee relationship it creates between the government and the contractor's personnel. These contracts, expressly or as administered, make the contractor personnel appear to be, in effect, government employees. Personal services contracts are generally prohibited; however, personal services contracts for professional medical services for DOD are authorized by law. Although all three services use personal services contracts to obtain health care professionals, additional training in this area is not necessary for the Army and Navy, according to a DAU official we spoke with, since these departments' contracting organizations have acquisition professionals who frequently work with personal services contracts. However, Air Force officials we spoke with reported that Air Force contracting personnel would benefit from increased attention to personal services contracting in the health care context because, unlike most government contractors, health care professionals are subject to the direction and supervision of the government.

CORs are federal employees designated by the contracting officer to perform certain contract administration duties. All CORs must meet training and experience requirements specified in DOD's Standard for Certification of COR for Service Acquisitions, issued in March 2010. This standard defines DOD-wide minimum COR competencies, experience, and training for three types of COR requirements, according to the complexity of requirements and contract performance risk. Prior to contract award, all CORs are required to take a basic 8-hour online training course, provided through DAU. In addition to DOD-wide requirements, contracting personnel we spoke with said CORs receive contract-specific training from the appointing contracting officer. CORs may also receive supplemental training provided by the military departments in medical services contracting. Table 5 describes the type of training provided to CORs by each military department.

Contracting oversight begins when the MTF nominates and the contracting officer appoints CORs to monitor and report on contractor performance. Importantly, CORs may not direct the work of the contractor by making commitments or changes that affect price, quantity, quality, delivery, or other terms and conditions of the contract.
Within DOD's Military Health System, CORs oversee the performance of contract health care professionals, including reviewing contractor invoices and documenting and reporting on the performance of health care professionals to the contracting officer. The contract oversight model for DOD's Military Health System differs from that of typical DOD acquisitions, because the military departments reported regularly using personal services contracts when contracting for health care professionals who, as described above, are subject to direction and supervision by the government. In contrast to typical DOD contracting oversight arrangements, supervision of contract health care professionals is typically accomplished by a government employee at the MTF. In these instances, government supervisors, who are usually health care providers, work with the COR as they oversee and report on the performance of contract health care professionals.

The level of experience and type of responsibilities of CORs assigned to medical services contracts vary by MTF location. CORs we spoke with had responsibilities ranging from the oversight of only a few contract health care professionals to more than 100 professionals. Further, some CORs are full-time and dedicated solely to overseeing contracts for health care professionals. For selected locations, we observed the following:

- For the Army, a COR working at the San Antonio Military Medical Center with a professional background as a budget analyst reported that CORs in this location have backgrounds ranging from administrative professionals to physicians. This COR was responsible for other duties at the large facility in addition to overseeing the performance of 130 FTE health care professionals. The COR for personal services contracts at Fox Army Health Center, a health systems specialist and former Army medic, was responsible for additional duties as the Chief of Clinical Operations. The official had 12 years of experience as a COR on medical services contracts.

- For the Navy, personnel assigned as CORs at the Portsmouth Naval Medical Center and one of its branch clinics work on a full-time basis overseeing approximately 100 health care professionals each. This group's experience ranged from members with less than 1 year to those with over 20 years of experience. The COR at Navy's Saratoga Springs Branch Clinic was classified as a health systems specialist and had over 20 years of experience as a COR. This COR was located off-site and was responsible for other duties in addition to overseeing approximately 30 task orders for health care professionals.

- For the Air Force, the service contract manager at Wilford Hall Ambulatory Surgical Clinic is the primary COR and government supervisors are alternate CORs. However, this is an arrangement that is unique to that facility, according to Air Force officials. The COR for personal services contracts at Andersen Air Force Base reported having worked in this capacity for about a year and a half and was responsible for other duties related to medical logistics. Prior to assignment as a COR at Andersen Air Force Base, this official had over 4 years of experience in contract services at a large MTF.

Section 732 of the NDAA for FY 2007 directed the Secretary of Defense to require consistent quality standards for contract health care professionals and the staffing companies that provide them across all of the military departments' MTFs.
According to DOD officials, DOD did not require consistent quality standards or take any additional actions, such as establishing a specific policy or guidance, in response to this legislation because officials believed the military departments were already applying these standards as part of their contracting processes. We found that each of the departments had policies or procedures in place that generally addressed most of the NDAA for FY 2007 quality standards.

The NDAA for FY 2007 calls for consistent credentialing requirements among MTFs. Credentialing is the process of inspecting and authenticating the documentation of appropriate education, training, licensure, and experience for health care professionals. Privileging is the corresponding process that defines the scope and limits of practice for a health care professional based on their relevant training and experience, current competence, peer recommendations, and the capabilities of the facility where the health care professional is practicing. The Assistant Secretary of Defense for Health Affairs is responsible for developing and overseeing DOD's credentialing and privileging requirements for health care professionals to ensure consistent application across the Military Health System. To implement DOD's requirements, the military departments' surgeons general, who are delegated responsibility by the secretaries of their respective departments, establish specific credentialing and privileging requirements, which their MTFs are required to follow.

In this review, we found that DOD and the military departments already had policies and procedures in place for the credentialing and privileging of health care professionals; however, these requirements are not yet consistent across the military departments. We previously reported in December 2011 that the military departments had established requirements that were in some cases inconsistent with DOD's requirements and with each other's. In response, DOD and military department officials reported taking steps to standardize the credentialing and privileging processes across DOD. For example, the Navy took steps to align its policy with DOD's by changing its requirement for primary source verification to apply to all provider licenses ever held instead of just those licenses held in the past 10 years. Additionally, in July 2012, DOD and VA formed a workgroup, which also included officials from the Army, Navy, Air Force, and JTF CapMed, to standardize the credentialing and privileging processes across the military departments, and eventually with VA, so that health care professionals could more easily move between DOD and VA facilities. As part of this effort, the workgroup was tasked with exploring the possibility of developing a joint credentialing software system for use by both DOD and VA. A DOD official told us that the workgroup expects to issue recommendations by June 2013.

The NDAA for FY 2007 also requires consistent quality standards for the staffing companies that provide contract health care professionals to the MTFs, including, at a minimum, the Joint Commission's Health Care Staffing Services certification standards. The 2011 version of the Health Care Staffing Services certification standards includes 23 standards that cover four topic areas: (1) leadership, (2) human resources management, (3) information management, and (4) performance measurement and improvement. (See appendix IV for a list of these standards.)
We found that because DOD did not require the military departments to use consistent quality standards for staffing companies as outlined in the NDAA for FY 2007, the military departments did not have policies or procedures in place for each of the Joint Commission's Health Care Staffing Services certification standards. However, each of them was able to provide examples of regulations, policies, or military department-wide standardized contracting language that they thought addressed many of these standards. The Air Force was able to provide similar documentation for its centrally administered health care professional contracts. The Air Force also was able to provide examples of regulations and policies for its other health care professional contracts, which are awarded and managed at the individual MTF level, but it could not provide standardized language for these contracts.

We determined that, in most cases, the documentation provided by each of the military departments generally addressed the individual Joint Commission standards for staffing companies that provide health care professionals. For example, for the Joint Commission requirement that the staffing company have a code of business ethics, each of the military departments provided citations to federal regulations or standardized contract clauses that required the staffing company to have an ethics code.

In some of these cases, the military departments addressed the individual Joint Commission standards by providing policies or standardized contract language that required the military departments to perform the tasks themselves instead of expecting them to be addressed by the staffing company. For example, for the Joint Commission requirement that the staffing company provide orientation to clinical staff, each of the military departments cited standardized contract clauses that would require contract health care professionals to participate in orientation and initial job training provided by the MTF.

However, the documentation provided by the military departments did not always appear to address certain Joint Commission standards. For example, for the Joint Commission requirement that the staffing company clearly define its leadership roles, one of the military departments cited standardized contract language that required the staffing company to provide a point of contact, but did not address company leadership roles.

In addition to the Joint Commission standards for staffing companies, the NDAA for FY 2007 also requires additional standards covering financial stability, medical management, continuity of operations, training, employee retention, access to contractor data, and fraud prevention. We found that each of the military departments provided documentation that generally addressed these additional standards. For example, each of the military departments provided citations to federal regulations that addressed fraud prevention for the staffing companies.

DOD has undertaken numerous studies concerning the governance of the Military Health System. Performed by both internal and external boards, commissions, task forces, and other entities, a number of these studies recommended dramatic changes in the organizational structure of the Military Health System, in part to address the fragmented approach that the military departments take to contracting for professional medical services.
While the military departments generally agreed with the need for improvements to their respective requirements determination processes, fragmentation in requirements and contracting arrangements persists because DOD has introduced change in its management and oversight of the Military Health System in an incremental and limited manner. In the absence of a DOD-wide approach for the acquisition of medical services, each military department continues to take a fragmented approach to contracting for medical professionals without considering the collective needs of the Military Health System. However, DOD is in the process of revising the governance structure of the Military Health System to centralize certain functions, such as acquisitions, that are fragmented among the military departments. Consequently, now is a particularly opportune time to revisit the need for a DOD-wide strategic sourcing strategy with both near-term and long-term dimensions, including reliable and detailed agency-wide data. Without such a strategy, the Military Health System may be missing opportunities to acquire professional medical services in the most cost-effective manner.

To achieve additional cost savings and efficiencies through increased use of strategic sourcing, we are recommending that the Secretary of Defense develop and implement a DOD-wide strategy to contract for health care professionals. The strategy should identify specific responsible organizations and timeframes, and should consist of both near-term and long-term components:

- In the near term, and to enable DOD to assess the efficacy and impact of such a strategy, DOD should identify a category of health care professionals or a multi-service market to pilot an approach to consolidating health care staffing requirements.

- Over the longer term, such a strategy should include an analysis of medical services spending based on reliable and detailed agency-wide data, and should enable DOD to identify opportunities to consolidate requirements and reduce costs.

We provided a draft of this report to DOD for comment. In its written comments, reproduced in appendix V, DOD concurred with our recommendation. The department also agreed that it is at an opportune time to revisit a Military Health System strategic sourcing strategy due to the organizational transformation that is occurring in the stand-up of the new Defense Health Agency. DOD stated that a Shared Services Contracting subworking group will include this report and its recommendations in its comprehensive review of contracting strategies, governance, and processes. DOD anticipates that the subworking group will present its final recommendations to senior leadership by August 2013. DOD also provided technical comments that were incorporated, as appropriate.

We are sending copies of this report to the appropriate congressional committees and the Secretary of Defense. In addition, the report is available at no charge on GAO's website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4841 or woodsw@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VI.
To determine what contracting practices are used by the military departments to contract for medical services, as well as what is known about the cost effectiveness of these practices, we analyzed fiscal year 2011 data obtained from the Federal Procurement Data System-Next Generation on medical service contracts to determine the extent to which the military departments used particular contracting practices, as well as the types and amount of medical services that were purchased. We obtained data on all contracts and task orders that were coded as medical services and were active in fiscal year 2011 from the Federal Procurement Data System-Next Generation, including information on obligation value, contract payment type, a general description of the type of service, and the extent of competition. To assess the reliability of the data, we looked for missing values and obvious errors and found the data were sufficiently reliable for the purposes of our analysis and findings. We interviewed officials from Army's Health Care Acquisition Activity, the Navy Medical Logistics Command, the Air Force Medical Service group, the Joint National Capital Region Medical Command (JTF CapMed), and TRICARE Management Activity (TMA). Additionally, we obtained and analyzed information from these officials on the contracting arrangements the departments used and any cost effectiveness studies that had been completed. We also reviewed GAO reports on costs and outcomes associated with different contracting approaches.

To determine the extent to which the military departments have consolidated health care staffing requirements and to what effect, we obtained data from each of the military departments, as well as JTF CapMed, on the number and dollar value of department-identified contracts with consolidated staffing requirements, including joint contracts, contracts for medical services at joint military treatment facilities (MTF), and multiple-award indefinite-delivery, indefinite-quantity contracts. We relied on these data to present the percentage and total dollar value of multiple-award indefinite-delivery, indefinite-quantity contracts awarded by the military departments that were active in fiscal year 2011. To assess the reliability of these data, we interviewed officials from the military departments on how they ensured the data were accurate and reliable, and tested the data for any missing values or obvious errors and then followed up with officials to obtain corrected data. We found the data were sufficiently reliable for the purposes of our analysis and findings. We spoke with officials from the military departments' contracting organizations to determine if cost savings could be demonstrated based on the use of multiple-award contracts. We also reviewed past reports by the Department of Defense (DOD), GAO, and others.

To determine the percentage of contract health care professionals who work at on-base MTFs versus off-base facilities, we requested that the three military departments, JTF CapMed, and TMA provide us data on the type, number of, and total obligations associated with contract health care professionals providing direct patient care at all MTFs and associated off-base clinics within the United States and its territories in fiscal year 2011.
For the purposes of our review, we collected data on parent MTFs and those off-base facilities that were under the purview of the MTF commander, were physically located outside the military installation, received some direct care dollars, and employed contract health care professionals. We defined a health care professional as an individual providing primary, specialty, or ancillary services at an MTF or associated off-base clinic who has received specialized training or education in a health-related field. We excluded from our scope MTFs located outside of the United States; we also excluded contracts for research and development-related services, dental and veterinary professions, as well as administrative, janitorial, food, or housekeeping services. To assess the accuracy and completeness of the reported data, we interviewed officials from the military departments on how they ensured the data were accurate and reliable. We also tested the data for any missing values or obvious errors and then followed up with officials to get corrected data. Based on our analyses and discussions with military department officials, we determined that caution should be exercised when using their data to draw conclusions about the actual number of contracted health care professionals in MTFs for any given time period. However, because we are presenting the reported data at a level that provides a high-level overview of the number of contract health care professionals providing care at MTFs during our period of review, we believe the data are sufficiently reliable for the purposes of our review.

To determine the extent to which costs associated with contract health care professionals at on-base and off-base facilities could be compared, we met with contracting officials from the military departments, TMA, and personnel at select MTFs to discuss the data we had received and the proposed cost comparison. Based on the data received from the military departments and these discussions, we concluded that DOD had not conducted a similar cost comparison of on-base and off-base facilities, and that the military department-reported data could not be used to compare costs associated with on-base and off-base facilities, since the number of off-base facilities was limited and costs associated with the different facilities could not be appropriately compared for a number of reasons, as indicated in this report. We were able to determine that the data were sufficiently reliable to present information on the number of and aggregate costs associated with contract health care professionals, but not for the purposes of a comparison of costs associated with on-base versus off-base facilities.

To determine the training requirements and experience of personnel responsible for awarding and administering contracts for health care professionals, we interviewed officials from the military departments' contracting organizations and collected supporting documentation on DOD-wide and military department-specific policy and requirements. For the purposes of this review, we limited the scope of our analysis to contracting officers and contracting officer's representatives (COR). We obtained additional descriptive information on the specific health care and acquisition training and experience of contracting personnel at selected medical facilities from each military department with small and large numbers of contract health care professionals on staff.
We selected these facilities based on the number of full-time equivalent (FTE) contract health care professionals the military departments reported to be working at each MTF in fiscal year 2011. We also visited selected MTFs, including three of the seven off-base clinics identified by the military departments, and interviewed officials with knowledge of the training and experience of contracting officers and CORs. Finally, we interviewed an official from the Defense Acquisition University regarding training in personal services contracts. We did not evaluate training records of contracting officers or CORs for sufficiency.

To determine the extent to which the military departments have policies or procedures that generally address legislated quality standards for contract health care professionals and the staffing companies that provide these professionals, we obtained documentation such as federal regulations, DOD and military department-level policies and procedures, and military department-wide standardized contracting language, as provided to us by each of the military departments and JTF CapMed. We reviewed this documentation to assess whether it generally addressed the legislated quality standards in the National Defense Authorization Act for Fiscal Year 2007, including the 2011 version of the Joint Commission's Health Care Staffing Services standards. We also interviewed officials from DOD, each of the military departments, and JTF CapMed to better understand how, if at all, the legislated quality standards were incorporated into their policies and procedures for contracting for health care professionals. Our analysis focused on whether the policies and procedures generally addressed the legislated standards; we did not assess the military departments' compliance with these standards.

To gain insight applicable to all objectives, we selected a nongeneralizable sample of MTFs based on the military department, the location, and the number of contract health care professionals at each facility. We met with officials from Ft. Belvoir Community Hospital, Walter Reed National Military Medical Center, Fairfax TRICARE Clinic, Dumfries TRICARE Clinic, San Antonio Military Medical Center, Portsmouth Naval Medical Center, and the TRICARE Prime Clinic Virginia Beach. We held meetings or received written responses to questions from officials at Fox Army Health Center, Saratoga Springs Naval Health Clinic, Wilford Hall Ambulatory Surgical Center, and Andersen Air Force Base Clinic. While the sample allowed us to learn about many important aspects of, and variations in, contracting for health care professionals in military treatment facilities, it was designed to provide anecdotal information, not findings that would be representative of all MTFs worldwide. See appendix III for a complete list of MTFs.

We conducted this performance audit from July 2012 to May 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
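The missing-value and outlier screening described in this appendix can be illustrated in miniature. The Python sketch below is our illustration, not GAO's actual tooling; the field names are hypothetical stand-ins for a procurement data extract, and the plausibility thresholds are invented for the example:

```python
# Illustrative data-reliability screen for a procurement extract.
# Hypothetical fields: contract_id, obligations, service_code, fte_count.
import csv

REQUIRED = ("contract_id", "obligations", "service_code", "fte_count")

def screen_records(path):
    """Return rows flagged for missing values or obvious errors,
    for follow-up with the reporting organization."""
    flagged = []
    with open(path, newline="") as f:
        for line_no, row in enumerate(csv.DictReader(f), start=2):
            problems = [k for k in REQUIRED if not (row.get(k) or "").strip()]
            try:
                if float(row.get("obligations", "")) < 0:
                    problems.append("negative obligations")
                if float(row.get("fte_count", "")) > 10_000:
                    problems.append("implausible FTE count")
            except ValueError:
                problems.append("non-numeric amount")
            if problems:
                flagged.append((line_no, problems))
    return flagged
```

Flagged rows would then go back to the reporting organizations for correction, mirroring the follow-up step described above.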
Facility name:
- Grand Forks Air Force Base
- Buckley Air Force Base (off-base primary military treatment facility)
- Columbus Air Force Base
- Brandon Clinic (off-base facility associated with MacDill Air Force Base)
- Walter Reed National Military Medical Center (previously the Walter Reed Army Medical Center and the National Naval Medical Center)
- Dumfries TRICARE clinic (off-base clinic associated with Fort Belvoir Community Hospital, previously Woodbridge Family Health Center)
- Fort Belvoir Community Hospital (previously Dewitt Army Community Hospital)
- Fairfax TRICARE clinic (off-base clinic associated with Fort Belvoir Community Hospital)

Appendix IV: The Joint Commission's Health Care Staffing Services Standards, 2011

Leadership:
1. The health care staffing services (HCSS) firm clearly defines its leadership roles.
2. The HCSS firm has a code of business ethics.
3. The HCSS firm addresses existing or potential conflicts of interest related to its internal and external relationships.
4. The HCSS firm complies with applicable laws and regulations.
5. The services contracted for by the HCSS firm are provided to customers.
6. The HCSS firm is accessible to customers and staff.
7. The HCSS firm addresses the resolution of complaints from customers and staff.
8. The HCSS firm identifies and takes steps to reduce safety risks.
9. The HCSS firm addresses emergency management.

Human resources management:
1. The HCSS firm confirms that a person's qualifications are consistent with his or her assignment(s).
2. As part of the hiring process, the HCSS firm determines that a person's qualifications and competencies are consistent with his or her job responsibilities.
3. The HCSS firm provides orientation to clinical staff regarding initial job training and information.
4. The HCSS firm assesses and reassesses the competence of clinical staff and clinical staff supervisors.
5. The HCSS firm encourages the improvement of clinical staff competence through ongoing educational activities.
6. The HCSS firm evaluates the performance of clinical staff.

Information management:
1. Information management processes meet internal and external information needs.
2. The HCSS firm maintains health information and personnel records for clinical staff.
3. The HCSS firm preserves the confidentiality and security of information about clinical staff and customers.
4. The HCSS firm has a process for maintaining continuity of information.

Performance measurement and improvement:
1. The HCSS firm plans an organized, comprehensive approach to performance improvement.
2. The HCSS firm maintains the quality and integrity of its data.
3. The HCSS firm collects data to evaluate processes and outcomes.

In addition to the contact named above, Debra A. Draper (Director); Bonnie Anderson (Assistant Director); LaTonya Miller (Assistant Director); Peter Anderson; Lori Atkinson; Jacob Leon Beier; E. Brandon Booth; Richard Burkard; Virginia Chanley; Gayle Fischer; Linda Galib; Julia Kennon; Victoria Klepacz; Heather B. Miller; Jeffrey Mayhew; Kenneth Patton; Carol D. Petersen; and Roxanna Sun made key contributions to this report.
DOD operates a large and complex health care system that employs more than 150,000 military, civilian, and contract personnel working in military treatment facilities. Each military department operates its own facilities, and contracts separately for health care professionals to supplement care provided within these facilities. In fiscal year 2011, these contracts totaled $1.14 billion. In the National Defense Authorization Act for Fiscal Year 2012, Congress mandated that GAO review the military departments' acquisition of health care professional services. This report examines (1) the contracting practices used by the departments and their cost effectiveness; (2) the extent to which the departments consolidate health care staffing requirements; (3) the percentage and associated costs of contract health care professionals working at on-base facilities versus off-base; (4) the training requirements for and experience of medical services contracting personnel; and (5) the extent to which the departments' policies address legislated quality standards for contract civilian health care professionals and for staffing companies that provide these professionals. To conduct this review, GAO reviewed military health care policies, analyzed DOD's fiscal year 2011 procurement and staffing data, and interviewed DOD military health system officials.

The military departments (the Army, Navy, and Air Force) generally use competition and fixed-price contracts when contracting for medical professionals. These practices can provide lower prices or reduced risk for the government. The military departments use a number of contract arrangements, including contracts awarded to multiple health care staffing companies, for health care professionals. Military department analyses indicate that multiple-award contracts result in lower prices compared to other contract arrangements.

The Department of Defense (DOD) does not have a consolidated agency-wide acquisition strategy for medical services. In the absence of such a strategy, contracting for health care professionals is largely fragmented. For example, the military departments had not consolidated their staffing requirements by developing joint contracts beyond a limited number of instances amounting to about 8 percent of the fiscal year 2011 spending on health care professionals. The departments have made efforts to use multiple-award contracts to consolidate intraservice staffing requirements, but GAO identified several instances where multiple task orders were placed for the same type of provider in the same area or facility. A more consolidated strategic sourcing strategy could allow DOD to acquire medical services in a more cost-effective way.

Nearly all of the military departments' 11,253 contract health care professionals (96 percent) worked in 114 on-base military treatment facilities in fiscal year 2011, while the remaining 4 percent worked in 8 off-base clinics. The costs associated with the contracted health care services provided at on-base facilities are not comparable to such costs at off-base facilities for a variety of reasons. For example, some Military Health System cost accounting data have been characterized as unreliable. In addition, according to DOD officials, labor categories, labor costs, and full-time equivalent calculations all vary by military department and in some cases by facility, contract, or geographic location, making a cost comparison problematic.
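To put the shares reported above in dollar and headcount terms, a quick back-of-the-envelope calculation (our illustration, using only the figures cited in this report) is enough:

```python
# Back-of-the-envelope scale check on the reported figures.
total_obligations = 1.14e9   # FY2011 obligations for health care professionals
joint_share = 0.08           # share reported under joint contracts
total_ftes = 11_253          # contract health care professional FTEs
on_base_share = 0.96

joint_dollars = total_obligations * joint_share          # ~$91 million
fragmented_dollars = total_obligations - joint_dollars   # ~$1.05 billion
on_base_ftes = round(total_ftes * on_base_share)         # ~10,803 FTEs
off_base_ftes = total_ftes - on_base_ftes                # ~450 FTEs

print(f"Joint contracts:     ${joint_dollars / 1e6:,.0f} million")
print(f"Fragmented buying:   ${fragmented_dollars / 1e9:.2f} billion")
print(f"On-base / off-base:  {on_base_ftes:,} / {off_base_ftes:,} FTEs")
```

In other words, roughly $1 billion of the fiscal year 2011 spending flowed through non-joint arrangements, which is the opportunity a consolidated strategy would target.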
DOD medical services contracting personnel are subject to DOD-wide training requirements. Consistent with DOD-wide training for all its contracting officers, DOD does not require health care contracting officers to have specialized training or experience. The required training provides a foundation for career field knowledge and is not targeted to specific types of acquisitions, including contracts for health care professionals. Health care experience among contracting personnel varied by location. Air Force contracting officers are not typically dedicated to medical services contracting, unlike their counterparts in the Army and Navy. The military departments provide contracting officer's representatives, who perform contract oversight, with specialized training in contracting for health care.

GAO found that each of the departments has policies or procedures in place that generally address most of the legislated quality standards enacted in 2007 for contract health care professionals and the staffing companies that provide them. However, DOD did not require the military departments to use consistent quality standards in response to this legislation because DOD officials believed that the departments were already applying these standards as part of their contracting processes.

GAO recommends that the Secretary of Defense develop a DOD-wide strategic approach to contracting for health care professionals. DOD concurred with the recommendation.
Since the 1960s, the United States has used polar-orbiting and geostationary satellites to observe the earth and its land, ocean, atmosphere, and space environments. Polar-orbiting satellites constantly circle the earth in a nearly north-south orbit, providing global coverage of conditions that affect the weather and climate. As the earth rotates beneath it, each polar-orbiting satellite views the entire earth's surface twice a day. In contrast, geostationary satellites maintain a fixed position relative to the earth from a high orbit of about 22,300 miles in space. Both types of satellites provide a valuable perspective of the environment and allow observations in areas that may be otherwise unreachable. Used in combination with ground, sea, and airborne observing systems, satellites have become an indispensable part of monitoring and forecasting weather and climate. For example, polar-orbiting satellites provide the data that go into numerical weather prediction models, which are a primary tool for forecasting weather days in advance, including forecasting the path and intensity of hurricanes. Geostationary satellites provide the graphical images used to identify current weather patterns and provide short-term warning. These weather products and models are used to predict the potential impact of severe weather so that communities and emergency managers can help prevent and mitigate its effects.

Federal agencies are currently planning and executing major satellite acquisition programs to replace existing polar and geostationary satellite systems that are nearing the end of their expected life spans. However, these programs have troubled legacies of cost increases, missed milestones, technical problems, and management challenges that have resulted in reduced functionality and major delays to planned launch dates over time. We and others, including an independent review team reporting to the Department of Commerce and its Inspector General, have raised concerns that problems and delays with environmental satellite acquisition programs will result in gaps in the continuity of critical satellite data used in weather forecasts and warnings. According to officials at NOAA, a polar satellite data gap would result in less accurate and timely weather forecasts and warnings of extreme events, such as hurricanes, storm surges, and floods. Such degradation in forecasts and warnings would place lives, property, and our nation's critical infrastructures in danger. The importance of having such data available was highlighted in 2012 by the advance warnings of the path, timing, and intensity of Superstorm Sandy.

Given the criticality of satellite data to weather forecasts, concerns that problems and delays on the new satellite acquisition programs will result in gaps in the continuity of critical satellite data, and the impact of such gaps on the health and safety of the U.S. population, we concluded that the potential gap in weather satellite data is a high-risk area. We added this area to our High-Risk List in 2013, and it remains on the 2015 update to the High-Risk List that was issued yesterday.

For over 40 years, the United States has operated two separate operational polar-orbiting meteorological satellite systems: the Polar-orbiting Operational Environmental Satellite series, which is managed by NOAA, and the Defense Meteorological Satellite Program (DMSP), which is managed by the Air Force.
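As an aside, the roughly 22,300-mile geostationary altitude cited above follows directly from Kepler's third law. The short derivation below is a standard textbook check, added here for illustration and not part of GAO's analysis:

```latex
% Geostationary orbital radius from Kepler's third law, using the
% earth's gravitational parameter GM_E = 3.986 x 10^14 m^3/s^2 and an
% orbital period of one sidereal day, T = 86,164 s.
\[
  r = \left( \frac{GM_E \, T^2}{4\pi^2} \right)^{1/3}
    = \left( \frac{(3.986\times 10^{14})\,(86164)^2}{4\pi^2} \right)^{1/3}
    \approx 4.216\times 10^{7}\ \text{m} = 42{,}164\ \text{km}.
\]
% Subtracting the earth's equatorial radius (about 6,378 km) gives an
% altitude of roughly 35,786 km, i.e., about 22,240 miles, consistent
% with the "about 22,300 miles" figure quoted in the text.
```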
Currently, there is one operational Polar-orbiting Operational Environmental Satellite (called the Suomi National Polar-orbiting Partnership, or S-NPP) and two operational DMSP satellites that are positioned so that they cross the equator in the early morning, midmorning, and early afternoon. In addition, the government relies on data from a European satellite, called the Meteorological Operational satellite, or Metop, as part of the operational polar satellite constellation.

In addition to the polar-orbiting satellites, NOAA operates GOES as a two-satellite geostationary satellite system that is primarily focused on the United States (see figure 2). The GOES-R series is the next generation of satellites that NOAA is planning; the satellites are planned to replace existing weather satellites, the first of which is due to reach the end of its useful life in 2015. The ability of the satellites to provide broad, continuously updated coverage of atmospheric conditions over land and oceans is important to NOAA's weather forecasting operations.

NOAA is responsible for GOES-R program funding and overall mission success, and has implemented an integrated program management structure with NASA for the GOES-R program. Within the program office, there are two project offices that manage key components of the GOES-R system. NOAA has delegated responsibility to NASA to manage the Flight Project Office, including awarding and managing the spacecraft contract and delivering flight-ready instruments to the spacecraft. The Ground Project Office, managed by NOAA, oversees the Core Ground System contract and satellite data product development and distribution. The program estimates that the development for all four satellites in the GOES-R series will cost $10.9 billion through 2036. In 2013, NOAA announced that it would delay the launch of the GOES-R and GOES-S satellites from October 2015 and February 2017 to March 2016 and May 2017, respectively. These are the current anticipated launch dates of the first two GOES-R satellites; the last satellite in the series is planned for launch in 2024.

In September 2010, we recommended that NOAA develop and document continuity plans for the operation of geostationary satellites that include the implementation procedures, resources, staff roles, and time tables needed to transition to a single satellite, a foreign satellite, or other solution. In September 2011, the GOES-R program provided a draft plan documenting a strategy for conducting operations if there were only a single operational satellite. However, in September 2013, we reported that the plan lacked key details, such as information on the potential impact of a satellite failure and timelines for implementing mitigation solutions (GAO, Geostationary Weather Satellites: Progress Made, but Weaknesses in Scheduling, Contingency Planning, and Communicating with Users Need to Be Addressed, GAO-13-597 (Washington, D.C.: Sept. 9, 2013)). We subsequently assessed NOAA's progress in implementing this recommendation in our December 2014 report and will discuss our results at today's hearing.

The JPSS program has recently completed significant development activities. For example, the program completed a major development milestone, the critical design review for the JPSS-1 mission, in April 2014. This is a significant accomplishment because the review affirms that the satellite design is appropriately mature to continue with development. Furthermore, NOAA is currently developing JPSS within its cost and schedule baselines.
However, while JPSS development is still within its overall life cycle cost baseline, key components have experienced cost growth. Between July 2013 and July 2014, the total program cost estimate increased by $222 million (or 2 percent). More than half of this increase was for three instruments. Program officials cited multiple reasons for these cost increases, including technical issues, additional testing, and the purchase of new parts. If JPSS costs were to continue to grow at this rate, the program could end up costing $2 billion more than expected by 2025. Therefore, moving forward, it will be important for NOAA and NASA managers to aggressively monitor and control components that are threatening to exceed their expected costs.

Also, while the launch date of the JPSS-1 satellite has not yet been affected, key components, such as the satellite's major instruments, have encountered delays in development and testing. Figure 3 compares key planned completion dates for the JPSS-1 spacecraft and its instruments from July 2013 to their actual or planned completion dates as of July 2014. JPSS program officials provided multiple reasons for the schedule changes, including technical issues the Advanced Technology Microwave Sounder (ATMS) instrument experienced during testing, a schedule adjustment to align with NOAA's geostationary satellite acquisition, and the October 2013 government shutdown. These delays have caused a reduction in schedule margin prior to the JPSS-1 satellite integration and testing phase. Further, because of the technical issues experienced on ATMS, the instrument has now become the critical path for the entire JPSS-1 mission, and only 1 month of schedule reserve remains until its expected delivery in March 2015. It will be important for NOAA and NASA managers to quickly resolve the instrument's technical issues before they become a more serious threat to the mission schedule and launch date.

In October 2013, the JPSS program office reported that a gap between the S-NPP satellite and the JPSS-1 satellite in the afternoon orbit could be as short as 3 months, which is 15 months less than NOAA estimated in 2012. However, we believe that this estimate is likely too optimistic. There are several reasons why this potential gap could occur sooner and last longer than NOAA currently anticipates:

- Inconsistent launch date plans: The program's analysis that JPSS-1 will be operational by June 2017 is inconsistent with NOAA's launch date commitment of March 2017, given that the program office estimates 6 months for on-orbit checkout and calibration/validation before the satellite data are operational.

- Unproven predictions about the on-orbit checkout and validation phase: The on-orbit checkout and calibration/validation phase could take longer than the program's estimated 6 months if there are issues with the instruments or ground systems. Also, additional algorithm work may be needed after the satellite launches, which could extend the validation time frame.

- Exclusion of a key risk: The JPSS program's gap assessment does not factor in the potential for satellite failures from space debris that is too small to be tracked and avoided. Thus, the S-NPP mission could end earlier than its 5-year design life, resulting in a gap period that occurs sooner and lasts longer than expected.

As a result, a gap in polar satellite data may occur earlier and last longer than NOAA anticipates.
In one scenario, S-NPP would last its full expected 5-year life (to October 2016), and JPSS-1 would launch as soon as possible (in March 2017) and undergo on-orbit testing for 6 months as predicted by the JPSS program office (until September 2017). In that case, the data gap would extend 11 months. Any problems encountered with JPSS-1 development resulting in launch delays, launch problems, or delays in the planned 6-month on-orbit test period could extend the gap period to as much as 5 years and 8 months. Figure 4 depicts possible gap scenarios.

NOAA officials acknowledge that the gap assessment has several limitations and stated that they plan to update it. Until NOAA updates its gap assessment to include more accurate assumptions and key risks, the agency risks making decisions based on a limited understanding of the potential timing and length of a gap.

Experts within and outside of NOAA identified almost 40 alternatives for mitigating potential gaps in polar satellite data, which offer a variety of benefits and challenges. The alternatives can be separated into two general categories. The first category includes actions to prevent or limit a potential gap by providing JPSS-like capabilities. The second category includes actions that could reduce the impact of a potential gap by (a) extending and expanding the use of current data sources with capabilities similar to the JPSS program; (b) enhancing modeling and data assimilation; (c) developing new data sources; or (d) exploring opportunities with foreign and domestic partners. While all of the alternatives have trade-offs, several alternatives may represent the best known options for reducing the impact of a gap:

- Extending legacy satellites, continuing to obtain data from European midmorning satellites, and ensuring legacy and European satellites' data quality remains acceptable;

- Obtaining additional observations of radio occultation;

- Advancing 4-dimensional data assimilation and the next generation global forecast model to make more efficient use of data still available and produce improved techniques for evaluating data; and

- Increasing high-performance computing capacity, a key factor for enabling greater resolution in existing and future models, which drives the pace of development for assimilation of data that could further improve NOAA's models.

Government and industry best practices call for the development of contingency plans to maintain an organization's essential functions in the case of an adverse event. NOAA developed its original polar satellite gap contingency plan in October 2012. We reported in September 2013 that NOAA had not yet selected the strategies from its plan to be implemented, or developed procedures and actions to implement the selected strategies, and we made a recommendation to address these shortfalls. In February 2014, NOAA updated its polar satellite gap contingency plan. NOAA made several improvements in this update, such as including additional alternatives that experts identified and accounting for additional gap scenarios. However, additional work remains for NOAA's contingency plan to fully address government and industry best practices for contingency planning. Until NOAA fully addresses key elements to improve its contingency plan, it may not be sufficiently prepared to mitigate potential gaps in polar satellite coverage. NOAA has also experienced challenges in implementing key activities outlined in the plan.
From among the available alternatives, NOAA identified 21 mitigation projects to implement to address potential satellite data gaps in the afternoon polar orbit. NOAA has demonstrated progress by implementing initial activities on these gap mitigation projects. However, NOAA has experienced delays in executing other key activities. For example:

A planned upgrade to the National Weather Service's operational high-performance computing capacity was to occur by December 2014. According to NOAA officials, an interim upgrade is planned for February 2015, with the full upgrade expected to be completed by July 2016.

NOAA does not plan to complete observing system experiments, which are to supplement its numerical weather prediction models in the absence of afternoon polar-orbiting satellite data, until 4 months later than planned.

Multiple projects have been affected by a major shortfall in the availability of high-performance computing for research and development efforts during fiscal year 2014.

Because a potential near-term data gap could occur sooner and last longer than expected, NOAA's ongoing gap mitigation efforts are becoming even more critical. According to Office of Management and Budget guidance, projects that require extensive development work before they can be put into operation are inherently risky and should be prioritized by comparing their costs and outcomes to other projects within a portfolio (a simple illustration of such a ranking appears after this discussion). However, the agency has not prioritized or accelerated the activities most likely to address a gap because it has been focused on implementing many different initiatives to see which ones will have the most impact. NOAA officials stated that further prioritization among mitigation activities was not warranted because the activities were fully funded and were not dependent on the completion of other activities. We disagree: there are dependencies among projects that would benefit from prioritization. While it makes sense to investigate multiple mitigation options, unless NOAA assesses the activities that have the most promise and accelerates those activities, it may not be sufficiently prepared to mitigate near-term data gaps.

After spending 10 years and just over $5 billion, the GOES-R program has completed important steps in developing its first satellite and has entered the integration and test phase of development for the satellite. While the GOES-R program is making progress, it has experienced recent and continuing schedule delays. As we have previously reported, problems experienced during the integration and test phase often lead to cost and schedule growth. In 2013, we reported that technical issues on both the flight and ground projects had the potential to cause further delays to the program schedule. By the time of our latest report, in December 2014, these and all other major milestones had been further delayed by 5 to 8 months. The GOES-R program cited multiple reasons for these recent delays, including challenges in completing software deliverables and completing communication testing for the spacecraft. In addition to these intermediate delays, NOAA moved the launch commitment date of the first GOES-R satellite to March 2016. Further, the program's actions to mitigate schedule delays introduce risks of their own and could therefore increase the length of the delay. For example, the program attempted to mitigate delays by performing system development while concurrently working on detailed planning.
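One way to act on the OMB guidance cited above (comparing costs and outcomes across a portfolio) is a simple benefit-per-cost ranking. The project names and numbers below are entirely hypothetical and are not NOAA's projects or scores; the sketch only illustrates the mechanics of such a prioritization.

    # Hypothetical portfolio ranking (illustrative data only).
    projects = [
        # (name, cost in $ millions, expected gap-mitigation benefit score)
        ("high-performance computing upgrade", 25.0, 80),
        ("data assimilation improvements", 10.0, 60),
        ("observing system experiments", 5.0, 20),
    ]

    ranked = sorted(projects, key=lambda p: p[2] / p[1], reverse=True)
    for name, cost, benefit in ranked:
        print(f"{name}: {benefit / cost:.1f} benefit points per $ million")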
In addition, the program has responded to prior delays by eliminating selected repetitive tests and moving to a 24-hour-a-day, 7-day-a-week spacecraft integration testing schedule. We have previously reported that overlapping planning and development activities and compressing test schedules increase the risk of further delays because little time remains to resolve any issues that arise.

A key element of a successful test phase is appropriately identifying and handling any defects or anomalies that are discovered during testing. While the GOES-R program has sound defect management policies in place and is actively performing defect management activities, there are several areas in which defect management policies and practices are inconsistent. These shortfalls involve a number of cross-cutting themes, including inconsistencies in performing and recording information pertinent to individual defects and in reporting and tracking defect information. The GOES-R program has also not efficiently closed defects on selected components. Specifically, data for the GOES ground system show that 500 defects remained open as of September 2014, and defect data for the spacecraft show that it is taking an increasing amount of time to close hardware-related defects (a simple tracking sketch appears at the end of this discussion). Until the program addresses shortfalls in defect management and reduces the number of open defects, it may not have a complete picture of remaining issues and faces an increased risk of further delays to the GOES-R launch date. The program is now reaching a point where additional delays in starting end-to-end testing could begin to adversely affect its schedule. As of August 2014, program officials could not rule out the possibility of further delays in the committed launch date.

GOES satellite data are considered a mission-essential function because of their criticality to weather observations and forecasts. Because of the importance of GOES satellite data, NOAA's policy is to have two operational satellites and one backup satellite in orbit at all times. However, NOAA is facing a period of up to 17 months when it will not have a backup satellite in orbit. Specifically, in April 2015, NOAA expects to retire one of its operational satellites (GOES-13) and to move its backup satellite (GOES-14) into operation. Thus, the agency will have only two operational satellites in orbit, and no backup satellite, until GOES-R is launched and completes an estimated 6-month post-launch test period. Figure 5 shows the potential gap in backup coverage, based on the launch and decommission dates of GOES satellites. During the time when no backup satellite is available, there is a greater risk that NOAA would need to rely on older satellites that are beyond their expected operational lives and may not be fully functional, rely on a foreign satellite, or operate with only a single operational satellite. Due in part to these risks, NOAA also faces an increased risk of further delays to the March 2016 GOES-R launch date. Any delay to the GOES-R launch date would extend the time without a backup to more than 17 months.

Government and industry best practices call for the development of contingency plans to maintain an organization's essential functions, such as GOES satellite data, in the case of an adverse event. In September 2013, we reported on weaknesses in the contingency plans for NOAA's geostationary satellites. NOAA has since improved its plan to mitigate gaps in satellite coverage.
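Returning to the defect data discussed above: whether closure is keeping pace with discovery reduces to two simple metrics over defect records. A minimal sketch with made-up records; the GOES-R program's actual defect databases and rules are not represented here.

    from datetime import date
    from statistics import mean

    # Hypothetical records: (date opened, date closed, or None if still open).
    defects = [
        (date(2014, 3, 1), date(2014, 5, 15)),
        (date(2014, 6, 10), None),
        (date(2014, 7, 2), date(2014, 9, 30)),
    ]

    open_backlog = sum(1 for _, closed in defects if closed is None)
    days_to_close = [(closed - opened).days for opened, closed in defects if closed]

    print(open_backlog)         # -> 1 defect still open
    print(mean(days_to_close))  # -> 82.5 mean days to close (75 and 90 here)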
In February 2014, NOAA released a new satellite contingency plan in response to these recommendations. This plan improved upon many, but not all, of the best practices: specifically, the plan improved in six areas and stayed the same in four areas. GOES-R program officials stated that it is not feasible to include strategies for preventing delays in the launch of the first GOES-R satellite in the contingency plan because such strategies are not static. While actively managing the program to avoid a delay is critical, it is also important that NOAA management and the GOES-R program consider and document feasible alternatives for avoiding or limiting such a launch delay. Until NOAA addresses the remaining shortfalls in its GOES-R gap mitigation plan, the agency cannot be assured that it is exploring all alternatives or that it will be able to effectively prepare to receive GOES information in the event of a failure.

Both the JPSS and GOES-R programs continue to carry risks of future launch delays and potential gaps in satellite coverage; implementing the recommendations in our December 2014 reports should help mitigate those risks. In the JPSS report released in December, we recommended, among other things, that NOAA:

update the JPSS program's assessment of potential polar satellite data gaps to include more accurate assumptions about launch dates and the length of the data calibration period, as well as key risks such as the potential effect of space debris on the expected lifetimes of JPSS and other polar satellites;

revise its existing contingency plan to address shortfalls noted in the 2014 report, such as identifying DOD's and Japan's plans to continue weather satellite observations, including recovery time objectives for key products, completing the contingency plan with selected strategies, and establishing a schedule with meaningful timelines and linkages among mitigation activities; and

investigate ways to prioritize mitigation projects with the greatest potential benefit to weather forecasting in the event of a gap in JPSS satellite data.

In the GOES report released in December, we recommended that NOAA, among other things, add information to the GOES satellite contingency plan on steps planned or underway to mitigate potential launch delays. For both reports, NOAA agreed with our recommendations and identified steps it plans to take to implement them. Specifically, with regard to the JPSS report, NOAA stated that it will make the necessary changes to its gap mitigation report and establish a process to prioritize mitigation projects. With regard to the GOES report, NOAA stated that it would add information to the GOES satellite contingency plan on steps planned or underway to mitigate potential launch delays.

In summary, NOAA has made progress on both the JPSS and GOES-R programs, but key challenges remain before the new satellites are launched and operational, and it is important that the agency take action to ensure that potential near-term gaps in satellite data are minimized or mitigated. On the JPSS program, NOAA has recently completed significant development activities and is working to launch its next polar-orbiting environmental satellite as soon as possible. However, the program continues to face increasing costs and schedule delays on key components. Further, the program's estimate of a 3-month potential gap in satellite data may be overly optimistic because it was based on inconsistent and unproven assumptions and did not account for key risks.
NOAA has made improvements to its polar satellite gap contingency plan, but it has experienced delays in executing key mitigation activities and has not prioritized or accelerated the activities most likely to address a gap. On the GOES-R program, progress in moving through integration and testing has been accompanied by challenges in maintaining the schedule for major milestones and in efficiently closing defects on key components. Further schedule delays could affect the committed launch date of the first GOES-R satellite. NOAA could experience a gap in satellite data coverage if GOES-R is delayed further and one of the two remaining operational satellites experiences a problem. NOAA has made improvements to its geostationary satellite contingency plan, but the plan still does not sufficiently address mitigation options for a launch delay.

Faced with an anticipated gap in the polar satellite program and a potential gap in backup coverage on the geostationary satellite program, NOAA has taken steps to study alternatives, establish mitigation plans, and improve its satellite contingency plans. However, these plans do not yet sufficiently address options to mitigate such gaps. Until NOAA prioritizes the mitigation activities with the greatest potential to reduce the impact of data gaps on weather forecasting, it may not be sufficiently prepared to mitigate them.

Chairman Bridenstine, Ranking Member Bonamici, Chairman Loudermilk, Ranking Member Beyer, and Members of the Subcommittees, this completes my prepared statement. I would be pleased to respond to any questions that you may have at this time.

If you have any questions on matters discussed in this testimony, please contact David A. Powner at (202) 512-9286 or at pownerd@gao.gov. Other key contributors include Colleen Phillips (assistant director), Alexander Anderegg, Christopher Businsky, Shaun Byrnes, Kara Lovett Epperson, Rebecca Eyler, Nancy Glover, Franklin Jackson, Nicole Jarvis, Joshua Leiling, James MacAulay, Lee McCracken, Karl Seifert, Kate Sharkey, and Shawn Ward.

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
NOAA is procuring the next generation of polar and geostationary weather satellites to replace aging satellites that are approaching the end of their useful lives. Both new sets of satellites will provide critical weather forecasting data over the next two decades. GAO has reported that gaps in polar satellite coverage and in backup coverage for geostationary satellites are likely in the near future. Given the criticality of satellite data to weather forecasts, concerns that problems and delays on the new satellite acquisition programs will result in gaps in the continuity of critical satellite data, and the impact of such gaps on the health and safety of the U.S. population, GAO added mitigating weather satellite gaps to its High-Risk List in 2013, and the issue remains on the 2015 update to the High-Risk List. GAO was asked to testify on two recently released reports on NOAA's satellite programs, specifically on (1) the JPSS program's status, the potential for a gap and mitigation alternatives, and contingency plans, and (2) the GOES-R program's status, potential for a gap, and contingency plans.

The National Oceanic and Atmospheric Administration's (NOAA) $11.3 billion Joint Polar Satellite System (JPSS) program has recently completed significant development activities and remains within its cost and schedule baselines; however, key components have experienced recent cost growth, and schedule delays could increase the potential for a near-term satellite data gap. In addition, while the program has reduced its estimate for a near-term gap in the afternoon orbit, its gap assessment was based on incomplete data. A gap in satellite data may occur earlier and last longer than NOAA anticipates. The figure below depicts a possible 11-month gap, in which the current satellite lasts its full expected 5-year life (until October 2016) and the next satellite is launched in March 2017 and undergoes on-orbit testing until September 2017.

Multiple alternatives exist to prevent or reduce the impact of a gap. Key options for reducing the impact of a near-term gap include extending legacy satellites, obtaining additional observations such as data from aircraft, advancing data assimilation and a global forecast model, and increasing high-performance computing capacity. While NOAA has improved its contingency plan by identifying mitigation strategies and specific activities, the agency's plan has shortfalls, such as not assessing the cost and impact of available alternatives. In addition, NOAA has not yet prioritized the mitigation projects most likely to address a gap, and key mitigation projects have been delayed. Until the agency addresses these shortfalls, it will have less assurance that it is prepared to deal with a near-term gap in polar satellite coverage.

NOAA's $10.8 billion Geostationary Operational Environmental Satellite-R (GOES-R) program has also made major progress on its first satellite. However, the program has continued to experience delays in major milestones and has not efficiently closed defects on selected components, both of which could increase the risk of a launch delay. As the GOES-R program approaches its expected launch date of March 2016, it faces a potential gap of more than a year during which an on-orbit backup satellite would not be available. Specifically, there could be no backup from April 2015 (when an operational satellite is expected to reach its end of life) through September 2016 (after GOES-R completes its post-launch test period).
Any delay to the GOES-R launch date would extend the length of time without a backup satellite, and if an operational satellite were to experience a problem during that time, there could be a gap in GOES coverage. NOAA has improved its plan for mitigating gaps in satellite coverage, but the plan does not yet include steps for mitigating a delayed launch. In its recently issued reports, GAO recommended that NOAA update its polar data gap assessment, address shortfalls in both its polar and geostationary contingency plans, and prioritize the mitigation projects most likely to address a gap in polar satellite coverage. NOAA concurred with GAO's recommendations and identified steps it is taking to implement them.
The MTW demonstration program was authorized by the Omnibus Consolidated Rescissions and Appropriations Act of 1996. The program is intended to give participating agencies the flexibility to design and test innovative strategies for providing and administering housing assistance. To implement such strategies, participating agencies may request waivers of certain provisions in the United States Housing Act of 1937, as amended. For example, agencies may request to waive certain provisions in order to combine the funding they are awarded annually from different programs into a single, authoritywide funding source. Requirements outside of the 1937 Housing Act, such as fair housing rules, cannot be waived under MTW. In addition, certain sections of the act, including those that cover labor requirements and the demolition and disposition of public housing, cannot be waived.

The 1996 act that created the program requires participating agencies to address three purposes and meet five requirements. Specifically, the three statutory purposes are to (1) reduce costs and achieve greater cost-effectiveness in federal housing expenditures, (2) give families with children incentives to obtain employment and become self-sufficient, and (3) increase housing choices for low-income families. For example, to reduce administrative costs, MTW agencies can reduce the frequency of income verifications for households with fixed incomes. In making these changes, MTW agencies must

1. serve substantially the same total number of eligible low-income families that they would have served had funding amounts not been combined;
2. maintain a mix of families (by family size) comparable to those they would have served without the demonstration;
3. ensure that at least 75 percent of households served are very low income;
4. establish a reasonable rent policy to encourage employment and self-sufficiency; and
5. assure that the housing they provide meets HUD's housing quality standards.

The program's ultimate goal is to identify successful approaches that can be applied to PHAs nationwide. The 1996 act authorized MTW for 30 agencies. HUD invited PHAs to apply for the program and selected an initial cohort of 24 PHAs from among the respondents. Six more were added as a result of the Jobs-Plus initiative. Since then, some agencies have opted not to participate, and additional agencies have been added to replace them. Other agencies have been added through specific appropriations language (see fig. 1). In addition, some agencies have completed their participation and exited the program. As of January 2012, a total of 39 PHAs were authorized to participate, and 35 were participating.

MTW agencies do not receive special funding allocations. Rather, they receive funds from the three traditional primary funding sources (public housing capital funds, public housing operating funds, and Housing Choice Vouchers). Traditionally, PHAs have been required to use the funds from each separate source only for specific purposes, but MTW agencies may combine the money from the three sources and use them for a variety of HUD-approved activities. This fungibility is intended to give MTW agencies greater flexibility. For example, public housing operating funds are traditionally used to make up the difference between the rents charged for units and the actual cost of operating them. Capital funds are used for modernization efforts and management improvements, while voucher funds provide rental assistance in the private market.
However, by combining funds, an MTW agency may use public housing capital funds to issue additional vouchers or use voucher funds to develop more public housing to better fit the needs of its community. MTW agencies also have the authority to use their funds to implement innovative activities that differ from traditional housing assistance. For instance, an MTW agency can use funds to replace public housing with mixed-income communities or reach special-needs populations, such as the homeless, using vouchers paired with supportive services.

A Standard Agreement, executed in 2008 to replace individual contracts between HUD and participating PHAs, governs the conditions of participation in the program. HUD enters into this agreement with each MTW agency. HUD created the agreement to standardize the language in its contracts and its reporting requirements and to help create a more operationally sound program. The Standard Agreement includes a termination date (the end of each agency's 2018 fiscal year) and an attachment that sets out reporting requirements (Attachment B). While much of the Standard Agreement is the same for all MTW agencies, two sections are tailored to individual agencies: a description of the formulas for determining the amounts of funding each agency will receive and an optional section that may include some agency-specific authorizations. (MTW agencies with less than 10 percent of their housing stock in the MTW program continue to submit the 5-Year and annual plans required by Section 5A of the 1937 Housing Act; only information not included in these documents would need to be included in a supplemental annual MTW plan.)

Attachment B includes a required table of contents that lists the information that agencies must include in their annual plans and reports. For example, the plan must include, among other things, a description of how each planned activity relates to at least one of the three purposes of the program; baselines, proposed benchmarks, and proposed metrics for assessing the outcomes of each activity; citations of the authorizations that give the agency the flexibility to conduct the activity; and descriptions of required rent reform activities. In addition, the plan must include a certification that the agency published a notice of plans to hold a public hearing on the plan, made the agency's annual plan available for public inspection, and conducted a public hearing to discuss the annual plan prior to its approval.

Similarly, the Standard Agreement outlines the information that MTW agencies are required to include in annual MTW reports. These reports must include, for example, detailed information on the impact of each activity, including comparisons of actual outcomes to the benchmarks proposed in the annual plan. If the agencies do not achieve the benchmarks or the activities are determined to be ineffective, the MTW agency is required to describe the challenges and, if possible, identify alternative activities that may be more effective. MTW agencies also are required to self-certify that they are in compliance with three statutory requirements: assisting substantially the same total number of eligible low-income families that they would have served had funding amounts not been combined; maintaining a mix of families (by family size) comparable to those they would have served had funding amounts not been combined under the demonstration; and ensuring that at least 75 percent of households served are very low income.
As well as meeting the requirements in the Standard Agreement, MTW agencies must submit tenant-related data to the Moving to Work section of the Public and Indian Housing Information Center (MTW-PIC). According to HUD officials, the MTW-PIC module was created because the standard PIC system that non-MTW agencies use to report tenant data could not accommodate some of the activities allowed under MTW, such as less frequent tenant income recertifications and rent calculations that vary from HUD's standard calculations. The MTW-PIC module was created in 2007, and most MTW agencies had transitioned to it by 2008.

The Office of Public Housing Investments within the Office of Public and Indian Housing at HUD headquarters is the designated program office for the MTW demonstration program. Within the Office of Public Housing Investments is an MTW Office that includes a program director and four coordinators who are each assigned to a specific group of MTW agencies. The MTW Office is responsible for, among other things, processing, reviewing, and approving all annual plans submitted by MTW agencies; establishing guidelines for MTW agencies; monitoring approved activities and accomplishments; and accepting annual reports. Individual MTW coordinators facilitate the reviews of planned and implemented activities and are responsible for coordinating with other HUD offices, including local HUD field offices, to obtain additional input on MTW agencies' planned activities and accomplishments. In January 2011, the Office of Public Housing Investments signed a Memorandum of Understanding with HUD's Office of Field Operations to increase collaboration and formally describe the roles and responsibilities of the MTW Office and HUD field office staff. Per this memorandum, field office staff assist the MTW Office by reviewing and providing input on annual plans and reports, ensuring that agencies are reporting tenant information, and participating in annual site visits.

MTW agencies provide descriptions of their activities and performance information in their annual reports to HUD. They show how the activities link to the program's statutory purposes in their annual plans, as required, and sometimes also in their annual reports. However, the type of performance information they provide varies, and HUD has provided limited guidance. While varied information on individual activities is available, a comprehensive evaluation of the MTW program is lacking, in part because HUD does not have a plan for identifying and analyzing standard performance data and has not established performance indicators for the MTW program as a whole. Further, HUD does not have a systematic process for identifying lessons learned by individual MTW agencies that can be replicated at other PHAs.

MTW agencies report information on specific activities, including descriptions, in their annual reports. Agencies are required in their annual plans to link each of their proposed activities to one of the program's three statutory purposes, and some agencies also show links between ongoing activities and statutory purposes in their annual reports. The three statutory purposes are to reduce costs and achieve greater cost-effectiveness in federal housing expenditures, give families with children incentives to obtain employment and become self-sufficient, and increase housing choices for low-income families.
According to the most recently available annual reports, 30 agencies have over 360 ongoing activities, including rent reform initiatives and work requirements (see table 1). According to the most recent annual reports (and corresponding plans) for these 30 MTW agencies, agencies associated the largest percentage of ongoing activities (41 percent) with the statutory purpose of reducing costs and improving cost-effectiveness (see fig. 2). For example, agencies associated changes in certification schedules, inspection protocols, and medical deductions with reduced costs. The agencies linked 30 percent of their ongoing activities to the statutory purpose of increasing housing choices and 24 percent to encouraging self-sufficiency. The agencies did not link 4 percent of their ongoing activities to any purpose in either their most recent annual plan or report.

In its Standard Agreement, HUD requires agencies to include in their annual reports performance information on the impact of each implemented activity, including describing the metrics used to assess outcomes and comparing actual performance with proposed benchmarks. While HUD did not define these terms in the agreement, it defined them in 2009 training materials. In these materials, HUD defined a metric as the "unit of measure that quantifies the changes that might occur as a result of the MTW activity" and a benchmark as the "projected outcome of the MTW activity." Further, in these 2009 training materials HUD defined an outcome as the "actual, measured result of the implemented activity." As examples, the training materials stated that a metric could be the hours of staff time saved, a benchmark could be the number of anticipated staff hours saved, and the outcome could be staff hours actually saved. HUD directs agencies to develop their own metrics and benchmarks for each activity based on local and community standards.

Our analysis of the most recent annual reports for 30 MTW agencies showed that the agencies reported performance information for 91 percent of the ongoing activities included in the reports and used over 1,000 metrics to assess these activities. MTW agencies collectively met the benchmarks associated with 40 percent of these metrics and fell short of meeting 17 percent of them. For 30 percent of the metrics, it was too soon to determine if the benchmarks had been met because the activities were not yet completed. For the remaining 13 percent, information (either the benchmark or performance data) was lacking to determine whether the benchmark was met (a sketch of this classification appears at the end of this discussion).

While MTW agencies are generally devising their own metrics for activities and reporting performance information, the usefulness of this information is limited because, in some cases, it is not outcome-oriented. Our analysis of the most recent annual reports for 30 MTW agencies showed that the type of information that agencies reported on the impact of their activities varied. For example, for similar activities designed to promote family self-sufficiency, one MTW agency reported only the number of participants, which is generally considered an output, and another did not provide any performance information. In contrast, a third agency reported on, among other things, the average income of program graduates, which we consider an outcome. Internal control standards state that good guidance (information and communication) is a key component of a strong internal control framework and that there is a need for clear documentation.
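The percentages above come from classifying each reported metric into one of four categories. A minimal sketch of that classification, using simplified rules of our own (for example, assuming higher outcomes are better) rather than HUD's definitions:

    def metric_status(benchmark, outcome, activity_complete):
        """Classify one reported metric into the four categories tallied above."""
        if benchmark is None or outcome is None:
            return "information lacking"
        if not activity_complete:
            return "too soon to determine"
        return "benchmark met" if outcome >= benchmark else "benchmark not met"

    # Example: a staff-hours-saved metric with a benchmark of 500 hours.
    print(metric_status(500, 620, True))   # -> benchmark met
    print(metric_status(500, 410, True))   # -> benchmark not met
    print(metric_status(500, 100, False))  # -> too soon to determine
    print(metric_status(None, 100, True))  # -> information lacking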
To be consistent with the GPRA Modernization Act of 2010, HUD's guidance on reporting performance information should indicate the importance of outcome-oriented information. Specifically, the act states that an agency should establish efficiency, output, and outcome indicators for each program activity. Furthermore, Office of Management and Budget guidance on implementation of the act states that quantitative and outcome-focused measures are preferred. At the time of our review, HUD's guidance did not specify that agencies should report quantifiable and outcome-oriented performance information. According to the Director of the MTW Office, Attachment B of the Standard Agreement is the most current guidance on the information that agencies should report in their annual reports. It simply states that agencies are to provide detailed information on the impact of the activity and compare it against the proposed benchmarks to assess outcomes, including whether an activity is on schedule. The attachment does not define terms or set expectations for the type of information to be reported.

After the Standard Agreement was executed in 2008, HUD conducted training for participating agencies. As previously discussed, the 2009 training materials defined key terms such as metric and outcome and outlined steps agencies could take to evaluate their activities. HUD also encouraged the MTW agencies to use metrics and benchmarks that did not focus on the number of individuals participating in an activity but rather on the objectives of the activity, and to report quantifiable information. While HUD has posted the 2009 training materials on its website, these materials have not been incorporated into Attachment B of the Standard Agreement. According to the Director of the MTW Office, HUD has not made its guidance more specific because agencies are implementing a wide variety of activities and thus require some reporting flexibility. We acknowledge the need for flexibility, but it is important that HUD require agencies to report at least some outcome-oriented performance information. Without more specific guidance on the reporting of performance information, HUD cannot be assured of collecting information that reflects the outcomes of individual activities. Such information would help HUD assess the demonstration program and whether the activities are furthering program purposes. As we have previously reported, obtaining performance information from demonstration programs that are intended to test whether an approach (or any of several approaches) can obtain positive results is critical. This information is needed to help determine whether the program has led to improvements consistent with its purposes.

HUD has sponsored three broad reviews of the MTW program, but these studies are not comprehensive evaluations because of data limitations, among other things. A 2004 Urban Institute evaluation of the MTW program found that most MTW agencies reported modest benefits from activities related to administrative streamlining and that these results were often not as dramatic as the agencies had anticipated. The study also noted the difficulty of determining whether MTW activities related to employment and income had any independent effect, and it found that MTW activities resulted in both greater and more limited housing choice. A second study, conducted in 2007 by Applied Real Estate Analysis, Inc.
and the Urban Institute, reviewed eight MTW agencies that had placed limits on the length of time that residents could receive housing assistance. The study found that each of the agencies' approaches varied, and it concluded that only limited information was available with which to evaluate outcomes or establish cause-and-effect relationships between agencies' policies and recipients' experiences. It noted that there were significant limitations to what could be learned from these experiences because no evaluative framework had been built into the program. (Robert Miller, Martin D. Abravanel, Helene Berlin, Elizabeth Cove, Maria-Alicia Newsome, Carlos A. Manjarrez, Lipi Saikia, Robin E. Smith, and Maxine V. Mitchell, The Experiences of Public Housing Agencies That Established Time Limits Policies Under the MTW Demonstration (Applied Real Estate Analysis, Inc. and The Urban Institute, May 2007).)

Finally, a 2010 HUD Report to Congress found that the effects of many MTW activities, especially as they related to residents, could not be conclusively identified because of the variety of and differences in the activities and metrics that MTW agencies were implementing. However, the report did identify some results concerning agencies' ability to more efficiently allocate resources and engage in strategic long-term planning. For instance, the study noted that some agencies had seen positive effects from combining their traditional sources of funding and streamlining their operations, for example, by simplifying their housing quality inspections.

These three studies of the MTW program and our work have identified several challenges that have hindered evaluation efforts. These challenges include the way the program was initially designed and the resulting lack of standard performance data, as well as the lack of performance indicators for the program as a whole. HUD has taken steps to address the problems noted with MTW's initial design and the lack of standard data; however, it has not analyzed the data it currently collects or determined whether these data are sufficient to evaluate similar activities and the program as a whole. As we have previously reported, comparable data are essential to a full analysis of programs that incorporate a variety of activities. We also noted that obtaining performance information from demonstration programs that are intended to test whether an approach (or any of several approaches) can obtain positive results is critical. Finally, we have reported that agencies need to identify any data that will be needed to assess the effectiveness of program regulations.

Researchers and others have noted the limitations that the program's initial design posed to evaluation. In the early years of MTW, rigorous evaluation strategies were not required, and the program lacked a research design that would have helped in determining baseline information. The 2004 Urban Institute review of MTW concluded that there were limits to what could be learned from its review for a variety of reasons, such as the inability to separate individual components of agencies' MTW activities for analysis. As a result, the report is mainly descriptive and qualitative. HUD's 2010 Report to Congress noted that because rent reform activities varied greatly and were not implemented using a controlled experimental methodology, the authors were unable to recommend specific reforms as best practices. To help evaluate aspects of the MTW program moving forward, MTW officials have added requirements for new agencies.
According to HUD’s 2010 Report to Congress, the three agencies admitted to the MTW program in 2009 had strong evaluation components. Two of these agencies have commitments from local universities to evaluate their programs. Additionally, HUD has required the two newest agencies to participate in a controlled rent reform study. However, these improvements will not help evaluate the program as a whole or the activities implemented by the 30 other MTW agencies. Likely due to the absence of an evaluation framework for the MTW program, researchers have noted the lack of standard performance data needed to evaluate similar activities and the program as a whole. The 2004 Urban Institute study noted that the lack of consistent data on resident characteristics, incomes, and rent payments prevented the authors from being able to determine whether individual agencies were able to achieve the goal of increasing self-sufficiency. Similarly, the HUD Inspector General reported in 2005 that HUD lacked the empirical data needed to assess the program as a whole. Since these reports, HUD has started collecting some additional data from MTW agencies, but it has not yet analyzed the data. HUD created the MTW-PIC module to collect tenant characteristics such as household size, income, and educational attainment. However, according to MTW officials, HUD has not used these data to analyze the program’s effects, such as changes in resident income. In addition, HUD’s Standard Agreement has required agencies since 2009 to provide information in their annual reports on the impact of activities, including benchmarks and metrics. While these reports are informative, they do not lend themselves to quantitative analysis because the reporting requirements do not call for standardized data, such as the number of residents that found employment. In addition, whether these data are sufficient to assess similar activities and the program as a whole is not clear, and HUD has not identified the data it would need for such an assessment. For example, neither MTW- PIC nor annual reports capture standard data on implemented activities. Further, according to the Director of the MTW Office, MTW-PIC does not include information on individuals who receive nontraditional services from an MTW agency, such as homeless assistance or case management. Representatives from MTW agencies have suggested that HUD should collect some standard data for similar activities. For example, they noted that if HUD required all agencies that implemented rent reform activities to report standard data, the results of these efforts could be analyzed even if the specific activities varied. The Director of the MTW Office also noted that MTW-PIC was a potential tool for collecting and analyzing standard demographic data. The MTW Office has recently developed a statement of work for an evaluation of the program, but HUD has not allocated funding for the study, according to the Director of the MTW Office. Among other things, the proposed evaluation is intended to assess the current state of the MTW demonstration and determine the extent to which the three statutory purposes have been addressed. The study is also expected to include an analysis of outcomes associated with specific activities and the demonstration as a whole to identify which MTW activities are appropriate for expansion to all PHAs. However, the approach envisioned may be limited because it would primarily rely on existing data sources. 
Until HUD develops and implements a plan (one that includes the identification of standard data) to quantitatively assess similar activities and the MTW program as a whole, HUD cannot determine their effectiveness. While such analyses may be challenging, they would enhance HUD's ability to rigorously assess the demonstration.

HUD has not established performance indicators for the MTW program as a whole. The GPRA Modernization Act of 2010 requires that federal agencies establish efficiency, output, and outcome indicators for each program activity as appropriate. Internal control standards also require the establishment of performance indicators. In addition, we have previously reported that successful performance indicators demonstrate results and provide useful information for decision making in order to track how programs and activities can contribute to attaining an organization's goals and mission, among other things. As previously discussed, MTW agencies set their own performance metrics for activities, but HUD has not established performance indicators for the program as a whole. HUD's Fiscal Year 2011 Annual Performance Plan established agencywide performance indicators but did not explicitly connect the MTW program to any of them (HUD, Fiscal Year 2011 Annual Performance Plan (Washington, D.C.: 2011)). While HUD has not set specific targets for the program, the Director of the MTW Office noted that the program's activities support some of the agencywide indicators. Specific performance indicators for the MTW program could be based on the statutory purposes. For example, agencies could report on the savings achieved (reducing costs) and the number of additional households served (increasing housing choices). Without performance indicators for the MTW program, however, HUD cannot demonstrate the results of the program as a whole.

While HUD has identified some lessons learned on an ad hoc basis, it does not have a systematic process in place for identifying such lessons. We have previously reported that obtaining impact information from demonstration programs that are intended to test whether an approach (or any of several approaches) can obtain positive results is critical. This information should be gathered to help determine whether programs have led to improvements consistent with their purposes. For example, HUD has previously identified promising practices that could be applied by PHAs nationwide. These practices, which are posted on HUD's website, included implementing savings accounts for public housing and voucher recipients to promote resident savings. Most recently, HUD's 2010 Report to Congress described promising policies, practices, and concerns. In addition, officials from some of the MTW agencies we interviewed noted that HUD officials had shared information on activities that had shown positive effects during site visits, quarterly phone calls, newsletters, and annual conferences. Finally, HUD's statement of work for its proposed evaluation of the MTW program includes the creation of five case studies that would review MTW flexibilities. However, these efforts have shortcomings. In most cases, the practices chosen were based on the opinions of HUD or contracted staff and largely involved anecdotal (or qualitative) data rather than quantitative data. The lack of standard performance data has affected HUD's ability to systematically identify lessons learned.
In its 2005 report on the MTW program (HUD, Office of Inspector General, 2005-SE-001), the HUD Inspector General noted that the lack of data on the program made it difficult to identify activities that could be considered models for addressing the three statutory purposes or that could be used to show the importance of individual policy changes. Further, HUD has not established criteria, such as demonstrated performance, for identifying lessons learned. Finally, HUD has not made regular efforts to review and identify lessons learned. Because HUD does not currently have a systematic process for identifying lessons learned, it is limited in its ability to promote useful practices that could be implemented more broadly.

HUD has policies and procedures in place to monitor MTW agencies. First, HUD requires program staff to review and comment on agencies' annual plans and reports. Second, staff review tenant data submitted by MTW agencies. Third, program staff conduct annual site visits at each participating agency to provide technical assistance and program updates. HUD generally follows these policies and procedures, which focus on technical assistance rather than compliance. Due in part to this focus, HUD's policies and procedures have several key weaknesses. Specifically, HUD has not clarified program terminology, ensured that each MTW agency is meeting statutory requirements, performed an annual risk assessment, or developed policies and procedures to verify the accuracy of key information that MTW agencies self-report.

HUD's monitoring policies and procedures for the MTW program are contained in a desk guide, which describes the roles and responsibilities of HUD staff in reviewing annual plans and reports and data submissions, making site visits, and performing other monitoring activities. In January 2011, HUD's Office of Public Housing Investments and Office of Field Operations signed a Memorandum of Understanding documenting the framework for headquarters and field staff to follow in overseeing MTW agencies. According to the memorandum, the MTW Office (within the Office of Public Housing Investments) is responsible for oversight of the MTW program. In many cases, the MTW Office works with field offices to jointly develop responses to MTW agency issues. Further, an MTW Working Group, consisting of representatives from Public and Indian Housing programs, the Real Estate Assessment Center, and the Office of Policy Development and Research, was established to assist with the annual plan and report review process. As part of the memorandum, the Offices of Public Housing Investments and Field Operations agreed to the protocols set forth in the desk guide.

HUD staff from the MTW Office and field offices and the MTW Working Group share responsibility for reviewing and commenting on participating agencies' annual plans and reports. The Standard Agreement (Attachment B) outlines the requirements for the annual plans and reports that agencies must submit. MTW coordinators, who are each responsible for a specific number of MTW agencies, have the lead role in reviewing annual plans and reports to determine if they meet the requirements of Attachment B and in obtaining input from other HUD staff, including field offices and the MTW Working Group. Field offices are required to review the annual plans and reports submitted by the MTW agencies in their jurisdictions and provide their assessment to the MTW coordinator. Similarly, the MTW Working Group reviews and provides comments to the MTW coordinator.
Coordinators summarize the comments from the field offices and MTW Working Group and send them to the agencies. The coordinators and field office staff work with MTW agencies to resolve any outstanding issues. Once such issues have been resolved, the MTW Office approves annual plans and accepts annual reports. (Soliciting public comments on the plan is important because, unlike other PHAs, MTW agencies are not subject to the Public Housing Assessment System, which includes a customer satisfaction survey that promotes resident participation.)

HUD's review procedures cover agencies' certifications related to the five statutory requirements. However, these procedures do not require MTW coordinators to verify each agency's certification that it has met the three statutory requirements. Interviews with MTW coordinators and field staff and documentation for our sample of seven MTW agencies indicated that HUD generally followed these procedures. Documentation we reviewed for the agencies in our sample showed that the coordinators generally completed checklists while reviewing annual plans and reports. For example, coordinators verified that all ongoing activities were reported, ensured the agency included its certification that it had met three of the five statutory requirements, and made certain the agency certified that it had held a public hearing on its annual plan, among other requirements. Coordinators also provided comments to agency staff on annual plans and reports. Once the agencies addressed all of the comments, the MTW Office notified the agency in writing that its plan had been approved and report accepted. Field office staff that we interviewed said that they reviewed annual plans and reports and sent their comments and concerns to MTW coordinators.

(MTW agencies are subject to a variety of other reporting requirements. For example, MTW agencies are required to report voucher utilization in the Voucher Management System. They also must procure a public accountant to perform an Office of Management and Budget Circular A-133 compliance audit and submit unaudited financial statements. In addition, they are subject to HUD physical and management inspections of public housing and on-site monitoring reviews related to voucher reporting.)

HUD staff also track agencies' tenant data submissions; for example, in January 2012, MTW agencies overall achieved a 100-percent tenant data submission rate. In addition, HUD conducts annual site visits to provide technical assistance to each MTW agency. The MTW Office and the local field office conduct these visits jointly. The MTW Office (in particular the coordinator assigned to the agency) takes the lead role in conducting the visit, including preparing the agenda, coordinating with the local HUD field office, and working with the MTW agency to select properties to visit. According to HUD officials, the primary objective of the site visit is to provide technical assistance and build a working relationship with the participating agencies, not to assess compliance with statutory requirements. However, HUD officials stated that if compliance issues with statutory purposes are found, HUD staff address these issues during the site visit, and coordinators often develop timelines for the agency to come into compliance.

Our analysis of documentation of site visits to participating agencies indicated that MTW Office and field staff generally followed HUD's annual site visit procedures. Specifically, analysis of site visit reports indicated that HUD officials generally discussed the effectiveness of activities and helped resolve any outstanding issues.
For example, as a result of site visits, HUD staff recommended that an agency include cost-saving measures in its annual plan, requested clarification of output measures, and encouraged one agency to submit articles to the MTW newsletter to share its experiences with how rent reform encouraged self-sufficiency. Interviews with our sample of MTW agencies and corresponding field office officials also indicated that HUD was following its policies and procedures for annual site visits. MTW agency officials we spoke with indicated that the site visits were generally beneficial because they provided an opportunity for in-person discussions that helped facilitate communication with HUD. HUD's field office staff noted their active involvement over the years, which had become more defined with the issuance of the desk guide in 2011. According to the Director of the MTW Office, the office is considering conducting future site visits using a risk-based approach. Under this approach, HUD would conduct site visits less frequently but would focus on larger agencies that had implemented a wide range of complex activities and on newly admitted agencies that were implementing new activities.

To foster information sharing across agencies and provide technical assistance, HUD employs a number of additional strategies. For example, HUD hosts annual conferences to share information with MTW agencies and facilitate information sharing among agencies. The conferences cover a variety of topics, and all participating MTW agencies are invited to attend. For example, the 2011 conference focused on effectively managing funds in a challenging budgetary environment. HUD also has engaged participating agencies in quarterly conference calls and other training related to program changes such as the conversion from PIC to MTW-PIC and the transition to the Standard Agreement. Further, HUD issues notices on various topics, such as MTW reporting requirements, and publishes quarterly newsletters that highlight activities relating to each statutory purpose, among other topics. HUD also publishes each agency's annual report and researchers' evaluations of MTW activities on its website.

Although HUD follows the policies and procedures that it has in place, it could do more to ensure that MTW agencies are demonstrating compliance with statutory requirements and to identify possible risks relating to activities implemented by each agency, among other things. First, HUD has not issued guidance to participating agencies clarifying key program terms, including definitions of the purposes and statutory requirements of the MTW program. Internal control standards require the establishment of clear, consistent goals and objectives. As previously noted, MTW authorizing legislation established three purposes for the program, and agencies must link each of their activities to one of these purposes. However, HUD has not clearly defined what the language in some of these purposes means, such as "increasing housing choices for low-income families." MTW agencies have linked activities to this purpose that range from using block grant funding to support homeownership programs, to requiring applicants to complete a renter education program, to establishing a prisoner re-entry housing program. HUD noted the lack of a clear definition in its 2010 Report to Congress but continued to require that MTW agencies link activities to this purpose.
According to MTW officials, they have not defined what is meant by "increasing housing choices" so that agencies have the ability to define this term in a manner that fits their local needs. In addition, HUD has not clarified what is meant by "serving a comparable mix of families" but nevertheless requires agencies to comply with this requirement. MTW agencies we spoke with described varying interpretations of this requirement. For example, officials from one agency told us that they observed how family sizes changed in their community and compared those changes to changes in families within the MTW program, using community survey data and data from the agency's internal system. Officials from another agency we spoke with said that over time it had become increasingly difficult to determine compliance with this statutory requirement.

HUD has recently taken steps to clarify some terminology, explaining how agencies can certify that at least 75 percent of the families they serve have very low incomes and that they are serving substantially the same number of households under MTW as they did before the program. In addition, HUD is revising its reporting requirements for MTW agencies. As part of this process, HUD officials told us that they plan to update their guidance to more completely collect information related to the program's statutory purposes and requirements. They acknowledged that the guidance could be strengthened to require MTW agencies to provide their agency-specific definitions for the three statutory purposes. As a first step, they noted that they planned to require agencies to define "self-sufficiency" by either choosing one of the definitions provided by HUD or creating their own. Similarly, the officials stated that they would consider requiring MTW agencies to choose between using HUD's definition of increasing housing choices or creating their own definition. Although a step in the right direction, allowing MTW agencies to create their own definitions of key terms would make it difficult to assess the effectiveness of efforts to address statutory purposes. HUD officials also said that the revised guidance would provide standardized tables for agencies to report data related to the requirement to serve a comparable mix of families. Until HUD clearly defines what is meant by all of the statutory purposes and requirements of the MTW program, it cannot effectively determine whether agencies are addressing these purposes and meeting requirements.

Second, HUD has only recently assessed agencies' compliance with two self-certified requirements and has not assessed compliance with the third. Internal control standards require control activities to be in place to address program risks. In addressing these risks, internal control guidance states that management should formulate an approach for assessing compliance with program requirements. While HUD has recently made efforts to assess agencies' compliance with two of the three self-certified requirements, it does not have a process in place to systematically review compliance with all three requirements. In 2011, HUD for the first time assessed participating agencies' compliance with the requirement to assist "substantially the same" number of eligible families that would have been served in the absence of MTW.
HUD collected data from MTW-PIC, the Voucher Management System, and each participating agency’s most recent annual report on the number of public housing units occupied, vouchers utilized, and other families housed and used a formula to compare these data with similar data reported before MTW. HUD and MTW agency staff we interviewed told us that they worked together to discuss discrepancies in the calculations. According to the Director of the MTW Office, agencies were in compliance with this requirement if they were serving at least 95 percent of the number of families in their baseline figure. HUD’s recent review of each agency’s baseline calculation indicated that all but one of the agencies were in compliance. Also in 2011, HUD reviewed MTW-PIC data for the first time to determine agencies’ compliance with the requirement that at least 75 percent of assisted residents be very low income. HUD’s analysis of MTW-PIC data showed that, as of September 2011, 91 percent of the residents served by MTW agencies fell into this category. While HUD has taken steps to assess compliance with these two statutory requirements, it has not yet developed a methodology for assessing agencies’ compliance with the requirement to maintain a comparable mix of families. The Director of the MTW Office acknowledged that self-certifications were not the best means of ensuring compliance and told us that the planned revisions to the reporting requirements for MTW agencies would help assess compliance with the requirements to maintain a comparable mix of families and ensure that at least 75 percent of families assisted are very low income. Without a process for systematically assessing compliance with statutory requirements, HUD lacks assurance that agencies are complying with them. Third, HUD has not performed an annual assessment of program risks. Internal control standards state that an agency should have a risk assessment plan that considers internal and external risk factors and establishes a control structure to address those risks. The standards also state that managers should focus on control activities to address risks that may involve verifications, performance reviews, and documentation, among other things. HUD’s own internal control standards also require its program offices to perform an annual risk assessment of their programs or administrative functions using a HUD risk-assessment worksheet. These standards also stress the importance of performing a risk assessment when there are significant program changes. According to the Director of the MTW Office, the office has not performed an annual risk assessment for the MTW program because it was not aware of this requirement. (MTW agencies are exempt from scoring in the Public Housing Assessment System and the Section 8 Management Assessment System. However, they are subject to physical inspections conducted by the Real Estate Assessment Center under HUD guidelines and are issued a score. This score is entered into the Public Housing Assessment System and can be viewed by MTW staff at any time. A score of 22 or below (the maximum score is 30) is flagged by the Real Estate Assessment Center and reported to the appropriate field office.) In addition, HUD provides the same level of monitoring to all MTW agencies, regardless of their perceived level of risk. While monitoring procedures are not risk-based, the Director of the MTW Office stated that his office would become aware of risks from HUD’s field office staff, which have routine responsibility for reviewing financial audits and Office of Management and Budget compliance audits.
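The two compliance checks described above, the 95 percent baseline comparison and the 75 percent income-targeting requirement, lend themselves to a short illustration. The following is a minimal sketch in Python; the function and field names are ours and the example figures are hypothetical, but the thresholds and the components of the families-served count reflect the text.

```python
# Minimal sketch of the two MTW compliance checks described above. Function
# and field names are illustrative; the 95 percent baseline threshold and
# the 75 percent very-low-income requirement come from the report text.

def serves_substantially_same(units_occupied: int,
                              vouchers_utilized: int,
                              other_families_housed: int,
                              baseline_families: int) -> bool:
    """HUD compared families currently served (public housing units
    occupied, vouchers utilized, and other families housed) with the
    agency's pre-MTW baseline; serving at least 95 percent of the
    baseline meant compliance."""
    families_served = units_occupied + vouchers_utilized + other_families_housed
    return families_served >= 0.95 * baseline_families


def meets_income_targeting(very_low_income_families: int,
                           total_families_served: int) -> bool:
    """At least 75 percent of assisted families must be very low income."""
    return very_low_income_families >= 0.75 * total_families_served


# Hypothetical agency: 4,750 families served against a 5,000-family
# baseline (exactly 95 percent), 91 percent of them very low income.
print(serves_substantially_same(2000, 2500, 250, 5000))  # True
print(meets_income_targeting(4322, 4750))                # True (about 91%)
```

In practice, as noted above, HUD drew the inputs for these calculations from MTW-PIC, the Voucher Management System, and agencies’ annual reports, and worked with agency staff to resolve discrepancies.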
As previously discussed, HUD is considering moving toward conducting risk-based site visits. In addition, according to HUD officials, the office is considering other methods to more rigorously analyze MTW agency risk factors. By not performing an annual risk assessment or implementing a risk-based approach to monitoring MTW agencies, HUD lacks assurance that it has properly identified and addressed risks that may prevent participating agencies from addressing program purposes and meeting statutory requirements. HUD also lacks assurance that it is efficiently using its limited monitoring resources. Fourth, HUD does not verify the accuracy of key information that MTW agencies self-report, such as by checking that reported data reflect the source documents. By not verifying the accuracy of any reported performance information, it lacks assurance that this information is accurate. To the extent that HUD relies on this information to assess program compliance with statutory purposes and requirements, its analyses are limited. Legislation has been proposed to expand the number of PHAs that can participate in the MTW program, and a recent HUD report recommended expanding the program up to twice its size. As of March 2012, a maximum of 39 PHAs could participate in the program, but a 2011 Senate bill would direct HUD to increase that number up to 250. In addition, legislation has been drafted that would establish MTW as a permanent program and eliminate the current restrictions on the number of agencies that can participate. HUD’s 2010 Report to Congress recommends increasing the number of participating agencies to about 60. HUD and some stakeholders believe that expansion could provide the needed information on the effect of the MTW program and allow more PHAs to test innovative ideas, but questions remain about the lack of performance information on current MTW activities. In addition, alternatives to expansion exist, including implementing a more narrowly focused program. According to HUD, some affordable housing advocates, and MTW agencies we interviewed, expanding the MTW program could help demonstrate the program’s effect and increase the number of lessons that can be learned from the program. HUD has reported that doubling the number of MTW agencies with the use of strategic criteria and program implementation could help demonstrate the effects of MTW on a broader scale and enable the housing industry to learn even more from the demonstration. For example, expansion could provide more information on how MTW flexibilities would affect a broader group of PHAs. The Director of the MTW Office noted that some MTW activities, specifically those related to administrative streamlining, had influenced the draft Affordable Housing and Self-Sufficiency Improvement Act of 2012 (AHSSIA). Some affordable housing advocates that we met with emphasized the value of the changes, such as decreases in concentrated poverty, that have occurred in some of the communities affected by the MTW program and indicated that expansion could enable more PHAs to address local needs and therefore benefit additional communities. Similarly, officials from MTW agencies that we contacted stated that expansion of the program would provide a broader testing ground for new approaches, best practices, and implementation innovations.
Officials from an organization that advocates on behalf of large PHAs and supports expansion noted that affordable housing needs varied by locality and that the MTW program enabled participating agencies to design effective approaches based on local needs. Similarly, another affordable housing advocacy organization told us that they supported expanding MTW not only because it enabled participating agencies to tailor activities to local needs but also because it involved local communities in the process. Officials from several of the MTW agencies we interviewed also noted that the MTW program had empowered them to create and implement strategies that addressed local issues and said that expanding the program would give more PHAs the same flexibility. For example, in one northeastern state where the housing stock was relatively old, the MTW agency was able to focus on developing new affordable housing. Another MTW agency in a western state with mostly newer housing stock chose to reduce the frequency of inspections of its properties and focus its efforts on administrative streamlining and the disposition of its older units. Further, several MTW agencies that we interviewed described how they implemented the requirement to establish a rent policy that encouraged employment and self-sufficiency. For example, officials from one MTW agency told us that they believed the traditional requirement that residents pay 30 percent of their adjusted income in rent was a disincentive to work, because as resident income increases, so does the payment toward rent. To encourage residents to seek work, this agency implemented work requirements and a minimum rent. Additionally, some agencies have used their MTW status to establish programs that focus on specific populations, including working families with children, the elderly and disabled, and the homeless. Some proponents of expansion that we interviewed also noted that expanding the MTW program could provide more PHAs with the ability to use funding from different sources more flexibly than is possible without MTW status. Agencies without MTW status have to implement their activities while adhering to the regulations associated with three different funding streams, evidence of the fragmented nature of housing assistance. In contrast, MTW agencies may receive waivers of certain provisions of the 1937 Housing Act in order to combine annual funding from separate sources into a single authoritywide funding source. HUD field office staff with responsibility for monitoring MTW agencies observed that the single-fund flexibility was beneficial because it enabled participating agencies to develop supportive service programs, such as job training or educational programs, which help move families toward self-sufficiency. One HUD field office official stated that this flexibility would be a significant benefit for other PHAs. An affordable housing advocate we met with also noted that this ability to use different kinds of funds interchangeably was beneficial because it enabled MTW agencies to shift funds based on local priorities. Further, officials from the MTW agencies we interviewed agreed that this flexibility was beneficial. For example, officials from one MTW agency stated it had been able to use the single fund to organize itself as a business organization, develop a strategic plan based on the housing needs of low-income families in the community, leverage public funds and public and private partnerships, and develop mixed-income communities.
Two of the MTW agencies that we interviewed also stated that the single-fund flexibility had enabled them to fund programs that encouraged self-sufficiency among residents. For example, officials explained that they had used funding for coaching and counseling services, job training support, and education programs. Finally, officials from three of the MTW agencies we interviewed noted a related benefit of participation. They said that their MTW status had enabled them to respond more quickly to real estate opportunities because they did not have to wait for HUD approvals to purchase properties. A lack of performance information, limited HUD oversight, and concerns about the program’s impact on residents raise questions about expanding the MTW program. As we noted previously, conclusive information about the effectiveness of the MTW program is limited in part because HUD does not have a plan for identifying and analyzing standard performance data, has not established performance indicators for the program as a whole, and does not have a systematic process for identifying lessons learned. HUD’s 2010 Report to Congress noted that the conclusive impacts of many MTW activities, particularly as they relate to residents, could not yet be known. For example, the report noted that the rent reforms implemented under MTW varied greatly and were not implemented using a controlled experimental methodology. As a result, it was not clear which aspects of rent reforms should be recommended for all PHAs. The report also noted the limitations that exist when evaluating the outcomes of MTW—limitations that stem from the weak initial reporting requirements and lack of a research design. The report concluded that, given these limitations, expansion should occur only if newly admitted PHAs structure their programs for high-quality evaluations that permit lessons learned to be generalized beyond a single PHA experience. Similarly, affordable housing advocates and legal aid organizations that we interviewed stated that because lessons had not been learned from MTW, there was no basis for expanding the program. For example, officials from a national affordable housing advocacy organization stated that some MTW agencies have used their flexibility to establish limits on the length of time someone can live in assisted housing, but there is little research on the effect of such efforts. The officials stated that there was no evidence that this policy had helped anyone become self-sufficient and move out of public housing. The officials added that data were not available on the extent to which MTW agencies have provided incentives for residents to become self-sufficient or have increased housing choices. Similarly, an official from a national housing law advocacy organization stated that data were not available to determine the effect of the MTW program, particularly at the national level. In addition, our own work, some research organizations, and affordable housing advocates question HUD’s ability to effectively manage an expanded MTW program. As previously noted, HUD’s current monitoring procedures have several key weaknesses, including the lack of a systematic process for assessing agencies’ compliance with statutory requirements and an assessment of program risks. Some research organizations also have questioned HUD’s capacity to oversee additional MTW agencies.
For example, the Urban Institute reported that the approval process that HUD was using at the time of the institute’s 2004 review would not be feasible for an expanded program because of the administrative burden involved. At the time of the 2004 study as well as our review, HUD reviewed each individual request to waive specific provisions of the 1937 Housing Act before approving annual plans. Staff from another research organization questioned whether HUD had the capacity to oversee additional agencies. Similarly, one affordable housing advocate that we interviewed stated that HUD’s capacity to oversee an expanded program was not clear, in part because current monitoring activities are not transparent. At the time of our review, HUD had four full-time MTW coordinators, who each managed from 6 to 10 MTW agencies. According to the Director of the MTW Office, it takes more resources for HUD to oversee MTW agencies than non-MTW agencies. Thus, if additional agencies were added under the current program design, HUD would likely need additional resources. Researchers and several of the affordable housing advocates and legal aid agencies that we met with also raised concerns that the current program, and therefore also an expanded program, could negatively affect residents of MTW agencies. For example, two research organizations have stated that residents could be negatively affected by MTW agencies that implement voucher policies that reduce portability—that is, residents’ ability to use their vouchers in an area outside of the area where they received them. One of these research organizations stated that the differences in the way voucher programs were implemented across MTW agencies could reduce residents’ ability to use vouchers outside of the area where they received the assistance. Officials from the other organization noted that some MTW agencies had instituted policies that prohibited vouchers from being transported out of the originating jurisdictions, thereby limiting housing choices. According to HUD officials, MTW agencies with policies that limit portability can make exceptions. For example, these agencies have made exceptions for residents seeking employment opportunities. Legal aid organizations that have worked with residents of MTW agencies as well as affordable housing advocates told us some of the requirements that MTW agencies have implemented, such as work requirements, were potentially harmful to residents. For example, legal aid representatives from one community told us that the work requirement was not consistently enforced across various mixed-income properties that included public housing as well as market-rate units. According to these officials, they have had clients who were evicted for not working, even though the clients were in school or disabled—both exceptions to the work requirement. These officials also stated that property managers in the city’s various mixed-income developments did not implement MTW policies consistently. For example, the officials stated that residents have been told by property managers that they would be in compliance with the work requirement if they were in school or another training program, only to have the MTW agency determine that they were not in compliance. According to HUD officials, inconsistent enforcement of policies is not unique to MTW agencies, and residents would have recourse.
Legal aid representatives that worked with residents of another MTW agency also told us that the work requirement was a punitive policy that negatively affected the poorest residents. The officials stated that there were better methods for encouraging work and self-sufficiency, such as job training. Officials from a national affordable housing advocacy organization agreed that work requirements are punitive and stated that they disagreed with a policy of making housing assistance contingent on other factors, such as having a job. In their view, housing assistance should be a stable form of assistance for low-income households. Alternatives to expansion include implementing a program that is targeted more to specific activities and waiving some regulations for all PHAs as described in proposed legislation. According to the Urban Institute, an alternative to expanding MTW could be to systematically test a limited number of programmatic alternatives—such as flat rents, time limits, or debt financing of capital improvements. This approach would not allow individual agencies as much discretion to design combinations of reforms around local conditions and priorities. However, it could yield more systematic evidence about the costs and benefits of particular program reforms if it included a rigorous evaluation design and mandatory data collection on key outcomes, such as the number and characteristics of participating households. Collection of such information in a standardized format would need to be a minimum requirement for participation if the point was to learn from the experiences of those testing activities. In addition, an official from an affordable housing advocacy organization that we met with stated that testing the effectiveness of discrete activities on a smaller scale would be useful. HUD also noted that altering the scope of the demonstration for new participants could improve what was learned from specific activities. For example, its 2010 Report to Congress stated that data on MTW could be strengthened if the scope of the demonstration were altered for new participants by selecting agencies committed to testing a particular activity, such as rent reform, and requiring rigorous evaluation. In December 2011, HUD issued a request for proposals for a demonstration that would test alternatives to the current rent structure in the voucher program. According to the proposal, the demonstration would most likely be undertaken at select MTW agencies. In addition, HUD’s 2012 appropriations act authorized a Rental Assistance Demonstration that would enable HUD to authorize and evaluate new approaches to preserving affordable rental housing, including converting public housing to project-based rental assistance. AHSSIA includes authorization for a revised version of the Rental Assistance Demonstration. Congress has also previously supported allowing more PHAs to participate in the program. Finally, we recently reported on cost savings that could be realized from allowing additional housing authorities to implement some of the reforms MTW agencies have tested. The MTW demonstration is designed to provide participating agencies with the flexibility to develop and test activities that achieve cost-efficiency, encourage residents with children to obtain employment and become self-sufficient, and increase housing choices for low-income families.
While this flexibility has allowed participating agencies to implement hundreds of activities, HUD has not done all that it can to evaluate the program’s effectiveness, identify successful approaches that could be applied to public housing agencies more broadly, or ensure that MTW agencies comply with program requirements. Because Congress is considering expanding the program to many more PHAs, the absence of information needed to conduct a comprehensive program evaluation and compliance reviews is significant. HUD has recognized the importance of rigorous evaluation by requiring newly admitted agencies to have strong evaluation components. However, these improvements will not help evaluate the program as a whole. Without more complete knowledge of the program’s effectiveness and the extent to which agencies are adhering to program requirements, it is difficult for Congress to know whether an expanded MTW will benefit additional agencies and the residents they serve. Recognizing that it needed to do more to improve what was known about the program’s effectiveness, HUD started requiring MTW agencies to describe the impact of each implemented activity in their annual reports beginning in 2009. However, the information that MTW agencies reported did not always reflect outcomes, and HUD’s guidance does not require that information on activities be quantifiable and outcome-oriented to the extent possible. Without more specific guidance on reporting performance information, HUD cannot be assured of collecting data that reflect the outcomes of activities. Further, challenges such as the lack of analysis of standard data and the absence of performance indicators have prevented comprehensive evaluations of similar activities and the overall MTW program. HUD has recently started collecting additional information through MTW-PIC and annual reports but has not yet analyzed the data. Further, whether the data collected are sufficient to assess similar activities and the program as a whole is not clear, and HUD has not identified the performance data it needs to undertake such analysis. Until HUD has a plan (that includes the identification of standard data) to quantitatively assess similar activities and the MTW program as a whole, HUD cannot determine their effectiveness. Additionally, HUD has not established performance indicators specific to MTW. Indicators linked to the statutory purposes of reducing costs, encouraging self-sufficiency, and increasing housing choices would help HUD demonstrate that the program has produced desired results. Similarly, HUD does not have a systematic process in place to identify lessons learned from the MTW demonstration. Identifying activities that could be replicated more broadly is a goal of the MTW program and could be aided by the analysis of some standard performance data. The absence of a criteria-based, regular process for identifying lessons learned complicates efforts to determine which MTW activities are most effective and should be replicated more broadly. At the same time, HUD’s monitoring efforts are not as strong as they could be. First, because HUD has not clarified key terms related to the three statutory purposes and five requirements, it cannot effectively determine whether MTW agencies are actually addressing these purposes and meeting requirements. Second, HUD does not have a process in place to systematically review compliance with all requirements.
Such a review is especially important to a program like MTW that allows participants to self-certify their compliance with some program requirements. HUD has begun assessing compliance with two of the MTW requirements that call for self-certification, but not the third, and thus lacks assurance that agencies are complying with all three. Moreover, HUD’s procedures for monitoring MTW agencies are not risk-based. It does not conduct an annual assessment of risks and provides the same level of monitoring for all agencies, even though some may pose greater risks than others. A risk-based approach to monitoring would provide greater assurance that HUD has addressed all risks, particularly those that may prevent participating agencies from addressing program purposes and meeting statutory requirements. Further, unless it implements a risk-based approach (such as that currently being considered for annual site visits) to monitoring MTW agencies, HUD cannot be assured that it is using its limited monitoring resources most efficiently. Finally, just as HUD does not assess compliance with all three self-certified requirements, it does not verify the accuracy of key information that agencies self-report, including information on the impact of MTW activities. Annual site visits have been used primarily to provide technical assistance rather than to assess self-reported information. By not verifying the accuracy of any performance information, HUD lacks assurance that this information is accurate. To improve what is known about the effectiveness of the MTW program, we recommend that the Secretary of the Department of Housing and Urban Development improve HUD’s guidance to MTW agencies on providing performance information in their annual reports by requiring that such information be quantifiable and outcome-oriented to the extent possible; develop and implement a plan for quantitatively assessing the effectiveness of similar activities and the program as a whole, including the identification of standard performance data needed; and establish performance indicators for the MTW program as a whole. To enhance the ability to identify MTW practices that could be applied more broadly, we recommend that the Secretary of the Department of Housing and Urban Development create a process to systematically identify lessons learned. To improve HUD’s oversight of the MTW program, we recommend that the Secretary of the Department of Housing and Urban Development issue guidance that clarifies key program terms, such as the three statutory purposes of the program and the five statutory requirements that MTW agencies must meet; develop and implement a systematic process for assessing compliance with statutory requirements; conduct an annual risk assessment for the MTW program and implement risk-based monitoring policies and procedures, such as those currently being considered for site visits; and implement control activities designed to verify the accuracy of a sample of the performance information that MTW agencies self-report. We provided a draft of this report to HUD. The Assistant Secretary for Public and Indian Housing provided written comments, which are reprinted in appendix II. HUD disagreed with our recommendation that the agency develop performance indicators for the MTW program as a whole, was in partial agreement with four recommendations, and generally agreed with three.
The agency said that developing programwide performance measures could be difficult and might be contrary to the nature of the demonstration. In addition, HUD emphasized the improvements that it had made to its reporting requirements in order to collect more consistent, outcome-oriented data. We acknowledged these improvements in the draft report, but as our recommendations indicated, we saw opportunities for additional improvements. HUD also noted that some of our recommendations might be a good fit for the existing program but that others might be more appropriate for a future expanded demonstration. In disagreeing with our recommendation that it establish performance indicators for the MTW program as a whole, HUD emphasized the difficulty of measuring all activities against the same standard. The agency noted that because each MTW agency had implemented a unique combination of activities, developing programwide performance measures would make determining the impacts of specific activities unclear and prevent the identification of individual policies that could be applied more broadly. However, the purpose of programwide indicators would not be to isolate the impact of individual activities but to demonstrate programwide results—including showing the extent to which the program was addressing its statutory purposes of achieving greater cost-effectiveness in federal housing expenditures, giving families with children incentives to obtain employment and become self-sufficient, and increasing housing choices for low-income families. HUD also stated that applying programwide performance measures would be complicated by the fact that activities that advance one statutory purpose might conflict with other purposes. We agree that it is important to evaluate similar activities and have a separate recommendation addressing this need. But the purpose of programwide assessment is to demonstrate whether the provision of flexibility in itself results in the intended benefits of the MTW program, such as cost savings or increased family self-sufficiency. Demonstrating that the increased flexibility the program offers has produced the intended results is critical, particularly as Congress considers whether to expand the program. We continue to believe in the importance of demonstrating program results and therefore continue to recommend that HUD develop performance indicators for the MTW program as a whole. HUD was in partial agreement with four recommendations. First, HUD said that proposed revisions to the reporting requirements for MTW agencies had addressed our recommendation that the agency improve its guidance to MTW agencies on providing performance information in annual reports. HUD’s draft guidance is in line with our recommendation that HUD require agencies to report quantifiable and outcome-oriented information. However, because these proposed revisions have yet to be finalized, we did not revise our recommendation. Second, HUD agreed that quantitatively assessing the effectiveness of similar activities was an important step but noted the difficulties associated with assessing the effectiveness of the program as a whole. However, as noted above, we continue to believe in the importance of demonstrating program results. Consequently, we did not revise our recommendation.
Third, HUD stated that providing a menu of standard metrics may be the best way to clarify the program’s statutory purposes and that it had made progress in recent years in addressing our recommendation that it issue guidance that clarifies the statutory requirements. HUD also noted that the proposed revisions to the reporting requirements would provide additional clarification on the statutory requirements. These efforts, which were acknowledged in the draft report, are a step in the right direction, and we encourage HUD to continue finalizing this guidance. As noted above, because these proposed revisions have yet to be finalized, we did not revise our recommendation. Fourth, HUD described recent efforts to assess compliance with two statutory requirements and the analysis that it could conduct once proposed revisions to reporting requirements for MTW agencies were finalized. Because the process used to assess compliance with one of the requirements has not been formalized in policy and the proposed revisions have not been finalized, we did not revise our recommendation that HUD develop and implement a systematic process for assessing compliance with statutory requirements. HUD generally agreed with the three remaining recommendations. For example, HUD agreed that it should proactively identify lessons learned and described some of its recent efforts to do so. We acknowledged these efforts in our draft report but noted the absence of a criteria-based, regular process for identifying lessons learned. HUD also described plans to develop a formal risk-based strategy for monitoring and, when we asked for further clarification, stated that it agreed with our recommendation to conduct an annual risk assessment for the MTW program. Finally, HUD discussed potential strategies for verifying the information that MTW agencies report using existing or planned HUD systems. HUD also requested that we consolidate four separate recommendations into two, but we continue to believe that maintaining distinctions between the separate recommendations is important. First, HUD requested that we combine two recommendations: that it create a plan to quantitatively assess the effectiveness of similar activities and the program as a whole (including identifying the standard performance data needed), and that it establish performance indicators for the program as a whole. Although related, the two recommendations are distinct because the first focuses on the need for program evaluation and the second on performance measurement. Program evaluations typically examine a broad range of information on program performance, while performance measurement shows whether a program has achieved specific objectives. As a result, we did not combine the recommendations. Second, HUD requested that we combine the recommendation that it issue guidance clarifying key program terms (such as the three statutory purposes and five statutory requirements) with the recommendation that it implement a systematic process for assessing compliance with statutory requirements. However, defining program requirements and assessing compliance with them are separate and distinct activities. Therefore, we did not combine the recommendations. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of Housing and Urban Development and to interested congressional committees.
In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-8678 or sciremj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III. Our objectives were to examine (1) what is known about the extent to which the Moving to Work (MTW) demonstration program is addressing the program’s statutory purposes, (2) the Department of Housing and Urban Development’s (HUD) monitoring of MTW agencies’ efforts to address these purposes and meet statutory requirements, and (3) potential benefits of and concerns about expanding the number of public housing agencies (PHA) that can participate in the demonstration program. To evaluate what is known about the extent to which the MTW program is addressing the program’s statutory purposes, we reviewed the most recent annual reports as of January 2012 for 30 MTW agencies. We reviewed these annual reports and the corresponding annual plans to identify the ongoing activities that the agencies were implementing, determine the extent to which these activities were linked with one or more of the program’s statutory purposes, and assess the performance information provided for each activity. To assess the performance information provided by MTW agencies, we examined HUD’s reporting guidance and compared it with internal control standards for federal agencies. We assessed the reliability of selected information in the reports by reviewing supporting documentation from a sample of seven MTW agencies and interviewing the officials responsible for preparing and reviewing this information. These seven agencies were Cambridge Housing Authority (Cambridge, Massachusetts); Chicago Housing Authority (Chicago, Illinois); Housing Authority of the City of Atlanta (Atlanta, Georgia); Housing Authority of the City of Pittsburgh (Pittsburgh, Pennsylvania); Housing Authority of the County of Santa Clara/Housing Authority of the City of San Jose (Santa Clara County and San Jose, California); Lawrence-Douglas County Housing Authority (Lawrence, Kansas); and Vancouver Housing Authority (Vancouver, Washington). We selected these agencies to provide diversity in geography, agency size, and length of time participating in the program. We determined that the reports were sufficiently reliable for the purposes of our review. Through interviews and a literature search, we identified three studies of the MTW program as a whole. We reviewed these studies to identify information on the program’s effectiveness and any challenges associated with assessing it. We determined that these studies were methodologically sound and reliable for our purposes. We examined HUD’s recent efforts to collect data from MTW agencies, including documentation on the reporting requirements for MTW agencies. In addition, we reviewed HUD’s fiscal year 2010-15 strategic plan and Fiscal Year 2011 Annual Performance Plan for any performance indicators for the MTW program. We also reviewed the GPRA (Government Performance and Results Act) Modernization Act of 2010, Office of Management and Budget guidance, internal control standards, and a GAO report on attributes of successful performance measures. Further, we examined five studies of specific MTW agencies or activities that were identified by HUD and representatives of the sample of MTW agencies we interviewed.
Finally, we reviewed published reports and HUD’s website for information on HUD’s efforts to identify lessons learned. To assess HUD’s monitoring of MTW agencies’ efforts to address the program’s statutory purposes and meet requirements, we obtained and reviewed documentation of monitoring policies and procedures, including the Standard Agreement that HUD executed with MTW agencies in 2008, the MTW Desk Guide, a 2011 Memorandum of Understanding between HUD’s Office of Public Housing Investments and Office of Field Operations, and other HUD guidance. Based on these documents and interviews with HUD staff, we identified three key monitoring processes: the review of annual plans and reports, reviews of data entered into the Moving to Work section of the Public and Indian Housing Information Center (MTW-PIC), and annual site visits to each MTW agency. To assess the extent to which HUD staff were following these monitoring policies and procedures, we reviewed documentation of monitoring activities for our sample of seven MTW agencies. For example, to verify the steps HUD had taken to review annual plans and reports, we reviewed the checklists that the MTW coordinators used to document their review of these plans and reports. We also reviewed HUD’s comment letters for fiscal year 2011. To verify the steps HUD had taken to review data submitted into the MTW-PIC system, we reviewed monthly reports that showed the degree to which MTW agencies overall complied with reporting requirements from August 2011 through January 2012. Finally, to verify that both headquarters and field office staff made site visits and the extent to which they made annual visits, we reviewed the most recently available site visit reports completed by the MTW Office for all agencies as of October 2011. In addition, we interviewed the MTW agencies in our sample and the corresponding HUD field office officials to discuss the annual site visits. We also compared HUD’s monitoring policies and procedures to internal control standards for the federal government and HUD’s own program management guidance. As a part of this analysis, we compared HUD’s guidance to MTW agencies with the internal control requirement for clear goals and objectives. We also reviewed information on HUD’s efforts to clarify how agencies could certify compliance with the requirement to assist “substantially the same” number of eligible families that would have been served in the absence of MTW. In addition, we compared HUD’s efforts to assess agencies’ compliance with statutory requirements with the internal control standard related to assessing compliance with program requirements. Further, we reviewed internal control standards for the federal government and HUD’s own internal control standards and identified the requirement that programs have an annual risk assessment. We interviewed HUD officials regarding any risk assessment performed for the MTW program. Finally, we interviewed HUD officials to determine whether any of the performance information that MTW agencies reported had been verified. We compared HUD’s lack of verification with the internal control standards and guidance that emphasized the need for control activities to ensure that program participants report information accurately.
To discuss the potential benefits and concerns associated with expanding the number of PHAs that can participate in the program, we reviewed studies, reports, and testimonies by researchers, affordable housing advocates, resident advocates, and the HUD Office of Inspector General. For all three objectives, we interviewed officials from the seven MTW agencies in our sample and representatives from affordable housing advocacy organizations such as the Council of Large Public Housing Agencies, the National Association of Housing and Redevelopment Officials, the National Leased Housing Association, and the Public Housing Authorities Directors Association. We spoke with resident advocacy organizations such as the National Low-Income Housing Coalition, the National Housing Law Project, and legal aid agencies that represented residents served by five of our sample MTW agencies. We also interviewed staff from the Center on Budget and Policy Priorities, a research organization that has studied and written about the MTW program; researchers who had evaluated the MTW program; and HUD officials from the MTW office and the field offices that corresponded to our sample of agencies. During our interviews, we discussed the potential benefits of expansion and the concerns of these organizations. Based on our review of available studies and reports and interviews with the above-mentioned stakeholders, we identified key benefits and concerns. We also made observations based on our findings related to the availability of performance information for the program and HUD’s monitoring efforts. We conducted this performance audit from July 2011 to April 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Paige Smith (Assistant Director), Anna Carbino, Emily Chalmers, John McGrail, Marc Molino, Lisa Moore, Daniel Newman, Lauren Nunnally, and Andrew Stavisky made key contributions to this report.
HUD’s MTW demonstration program gives participating PHAs the flexibility to create innovative housing strategies through their 2018 fiscal years. MTW agencies must create activities linked to three statutory purposes—reducing costs, providing incentives for self-sufficiency, and increasing housing choices—and meet five statutory requirements. Congress is considering expanding MTW and has asked GAO to examine what is known about (1) the program’s success in addressing the three purposes, (2) HUD’s monitoring efforts, and (3) the potential benefits of and concerns about expansion. GAO analyzed the most current annual reports for 30 MTW agencies; compared HUD’s monitoring efforts with internal control standards; and interviewed agency officials, researchers, and industry officials. Public housing agencies (PHA) that participate in the Moving to Work (MTW) program report annually on the performance of their activities, which include efforts to reduce administrative costs and encourage residents to work. But this performance information varies, and the Department of Housing and Urban Development’s (HUD) guidance does not specify that it be quantifiable and outcome-oriented. Further, HUD has not identified the performance data that would be needed to assess the results of similar MTW activities or the program as a whole and has not established performance indicators for the program. The lack of such analyses and indicators has hindered comprehensive evaluation efforts, although such evaluations are key to determining the success of any demonstration program. Further, while HUD has identified some lessons learned from the program, it has no systematic process to identify them and thus has relied primarily on ad hoc information. The absence of a systematic process for identifying lessons learned limits HUD’s ability to promote useful practices that could be more broadly implemented to address the purposes of the program. HUD generally follows its MTW monitoring policies and procedures, but they could be strengthened. HUD staff review and approve each MTW agency’s annual plan to ensure that planned activities are linked to program purposes and visit each MTW agency annually to provide technical assistance. But HUD has not taken key monitoring steps set out in internal control standards, such as issuing guidance that defines program terms or assessing compliance with all of the requirements. Without clarifying key terms and establishing a process for assessing compliance with statutory requirements, HUD lacks assurance that agencies are actually complying with the statute. Additionally, HUD has not done an annual assessment of program risks despite its own requirement to do so and has not developed risk-based monitoring procedures. Without taking these steps, HUD lacks assurance that it has identified all risks to the program. Finally, HUD does not have policies or procedures in place to verify the accuracy of key information that agencies self-report. For example, HUD staff do not verify self-reported performance information during their reviews of annual reports or annual site visits. Without verifying at least some information, HUD cannot be sure that self-reported information is accurate. Expanding the MTW program may offer benefits but also raises questions. According to HUD, affordable housing advocates, and MTW agencies, expanding MTW to additional PHAs would allow agencies to develop more activities tailored to local conditions and result in more lessons learned.
However, data limitations and monitoring weaknesses raise questions about expansion. HUD recently reported that expansion should occur only if newly admitted PHAs structured their programs to permit high-quality evaluations and ensure that lessons learned could be generalized. Until more complete information on the program’s effectiveness and the extent to which agencies are adhering to program requirements is available, it will be difficult for Congress to know whether an expanded MTW would benefit additional agencies and the residents they serve. Some researchers and MTW agencies suggested alternatives to expansion, including implementing a program that was more limited in scope. GAO makes eight recommendations to HUD: that HUD improve its guidance on reporting performance information, develop a plan for identifying and analyzing standard performance data, establish performance indicators, systematically identify lessons learned, clarify key terms, implement a process for assessing compliance with statutory requirements, conduct annual assessments of program risks, and verify the accuracy of self-reported data. HUD agreed, generally or in part, with seven of them but disagreed with the recommendation that it create overall performance indicators. GAO believes, however, that such indicators are critical to demonstrating program results and thus maintains its recommendation.
Although intercollegiate sports may bring to mind nationally televised football and basketball games, 4-year schools’ intercollegiate sports programs vary widely, from small programs involving fewer than 10 teams with expenditures of less than $1 million to large programs with more than 900 student-athletes and expenditures in excess of $50 million. At many schools, intercollegiate athletic competition serves primarily to meet the needs of student-athletes—to give them opportunities to develop their athletic ability as they pursue their courses of study. Schools also view intercollegiate athletics as a means of recruiting prospective students. At schools with large athletic programs, sports serve as an important focal point for students, faculty and staff, alumni, surrounding communities, and the national television audience. Typically, schools with the largest number of athletic programs and facilities belong to Division I-A of the NCAA, and those with smaller programs are members of NCAA Divisions II or III or the second major national collegiate association, NAIA. Most 4-year postsecondary institutions with intercollegiate athletic programs participate in one of these two associations. NCAA, the larger of the two, administers intercollegiate athletics for over 1,000 4-year (baccalaureate degree-granting) schools. Division I member schools are further divided into three categories—I-A, I-AA, and I-AAA—with those that have larger football programs generally placed in I-A and those without football programs in I-AAA. Division I-AA schools sponsor football but are not subject to the spectator attendance requirements that apply to Division I-A schools. In April 2000, NAIA consisted of 330 member institutions. The NAIA does not have divisions except for basketball and football, which each have Divisions I and II. Although no federal monies fund intercollegiate sports programs, federal involvement has arisen in part as a result of civil rights legislation. For example, at schools receiving federal financial assistance, all education programs and activities—including intercollegiate athletic programs—are subject to title IX of the Education Amendments of 1972, which prohibits discrimination on the basis of sex. Federal regulations implementing title IX require that men and women be provided equitable opportunities to participate in intercollegiate athletics, as well as equitable scholarships, facilities, equipment, supplies, and other benefits. The Department of Education’s Office for Civil Rights assesses schools’ compliance with these requirements. To comply with requirements concerning equitable opportunities to participate in intercollegiate sports, schools must meet any one of the following three criteria, which Education refers to as parts of a three-part test: (1) intercollegiate-level participation opportunities for male and female students are provided in numbers substantially proportionate to their respective enrollments; (2) the institution can show a history and continuing practice of program expansion that is demonstrably responsive to the developing interests and abilities of the members of the underrepresented gender; or (3) it can be demonstrated that the interests and abilities of the members of the underrepresented gender have been fully and effectively accommodated by the present program. Since the early 1980s, the number of women participating in intercollegiate sports has increased substantially.
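Part (1) of the three-part test described above turns on a numerical comparison and can be illustrated with a short sketch. The Python below is illustrative only: the regulations do not define a numeric tolerance for “substantially proportionate,” so the 5-percentage-point threshold and all figures here are hypothetical.

```python
# Illustrative sketch of part (1) of the three-part test: comparing women's
# share of athletes with their share of undergraduate enrollment. The
# regulations do not define a numeric tolerance for "substantially
# proportionate"; the 5-percentage-point tolerance below is hypothetical.

def substantially_proportionate(female_athletes: int, male_athletes: int,
                                female_undergrads: int, male_undergrads: int,
                                tolerance: float = 0.05) -> bool:
    """Return True if the gap between women's share of athletes and
    women's share of enrollment falls within the given tolerance."""
    athlete_share = female_athletes / (female_athletes + male_athletes)
    enrollment_share = female_undergrads / (female_undergrads + male_undergrads)
    return abs(athlete_share - enrollment_share) <= tolerance


# Hypothetical school: women are 54 percent of undergraduates but only
# 41 percent of athletes, so part (1) is not met and the school would
# need to satisfy part (2) or part (3) instead.
print(substantially_proportionate(205, 295, 5400, 4600))  # False
```

In an actual compliance review, the Office for Civil Rights applies case-specific judgment rather than a fixed numeric threshold, so a calculation like this could only be a starting point.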
Although male athletes still outnumber their female counterparts, the growth in men’s participation has been much smaller, and the number of women’s teams now exceeds the number of men’s teams. The percentage of male undergraduates who participate in intercollegiate athletics is essentially the same as it was in 1981–82, while the percentage of women has increased considerably. The rapid growth in women’s participation in intercollegiate sports since 1981–82 has narrowed the gap between genders (see fig. 1). The number of women in intercollegiate sports increased by 81 percent (from 90,000 to 163,000 participants) and the number of men increased by 5 percent (from 220,000 to 232,000 participants) between 1981–82 and 1998–99. The growth in women’s participation was fastest during the early 1980s and in the 1990s. Men’s participation also grew in the early 1980s but peaked in 1985–86. Since then, it has decreased modestly and then fluctuated within a narrow range. The growth in the number of women athletes may reflect, in part, the rapid increase in women’s undergraduate enrollment. From 1981–82 to 1998–99, women’s undergraduate enrollment grew by 30 percent, compared to 6 percent for men. However, women’s participation also grew as an overall percentage of women undergraduates. Women athletes made up 3.9 percent of women undergraduates in 1981–82 and 5.5 percent in 1998–99. In contrast, the proportion of undergraduate men participating in athletics remained relatively steady, starting and ending the period at 9.3 percent. The trends in the number of women participants varied by sport. For example, table 1 shows that the biggest increase in the number of women participants—about 18,000—was in soccer. Though participation increased in most sports, five sports reported decreases in participation. The biggest decline occurred in gymnastics, with nearly 700 fewer women gymnasts in 1998–99 than in 1981–82. In men’s sports, increases and decreases were more evenly balanced, with increases in the number of participants in 14 sports and decreases in 12. As shown in table 2, the greatest increase in numbers of participants occurred in football, with about 7,200 more players. Football also had the greatest number of participants—approximately 60,000, or about twice as many as the next largest sport. Wrestling experienced the largest decrease in participation—a drop of more than 2,600 participants. Though the number of male participants was greater than the number of female participants in 1998–99, there were 330 more women’s teams than men’s teams. The average women’s team had fewer athletes than the average men’s team. From 1981–82 to 1998–99, the number of women’s intercollegiate sports teams increased by 66 percent (from 5,695 to 9,479 teams). Most sports saw an increase in the number of teams, with the largest increase occurring in women’s soccer, where the number of teams rose from 80 to 926. The greatest decrease occurred in gymnastics, where the number of teams fell from 190 to 90 (see table 3). Half of men’s sports saw a decline in the number of teams. Two sports had no change, and the remaining sports (nearly half) had an increase in the number of teams. As with women’s sports, the largest increase came in soccer (135 new teams). Football, the sport that saw the largest increase in the number of participants, saw a decrease of 37 teams, mainly from NAIA schools. Gymnastics, fencing, and rifle saw the largest percentage declines in the number of teams.
The largest decrease in the number of teams was in wrestling (171 teams) (see table 4). About 80 percent of schools added one or more women’s sports teams during the 1992–93 to 1999–2000 period, and more than two-thirds did so without discontinuing any teams. Student interest in a particular sport was often cited as an influence behind many of these decisions. Gender equity considerations also often influenced decisions to add women’s teams and discontinue men’s teams, particularly at schools with large athletic programs. The financial impact of adding or discontinuing teams varied widely by size of program and by sport. Overall, among the 1,191 colleges and universities responding to the questionnaire, 963 added at least one team and 307 discontinued at least one (see fig. 2). However, of the 948 schools that added one or more women’s teams, 72 percent did so without discontinuing any teams. Only about 16 percent of all respondents neither added nor discontinued a team from 1992–93 through 1999–2000. In total, schools added nearly three times as many women’s teams as men’s teams during this period—1,919 teams for women, compared with 702 for men. They discontinued more than twice as many men’s teams—386 teams for men, 150 for women. Schools with smaller programs were more likely to add men’s teams. Only about 3 percent of the schools with the largest intercollegiate athletic programs (NCAA Division I-A) added one or more men’s teams, compared with 39 percent of NCAA Division III schools and 54 percent of NAIA schools (see table 5). The level of student interest was the factor schools cited most often as greatly or very greatly influencing their most recent decision to add both men’s and women’s teams (see fig. 3). Overall, 52 percent of the respondents that had added a women’s sports team indicated that student interest was a great or very great influence in the decision, and nearly as many schools (47 percent) cited the need to meet gender equity goals or requirements. Other factors cited less frequently when adding a women’s team included the availability of adequate facilities and sufficient equipment, the growth in the number of teams to compete against, community interest, and enough improvement in a club team’s skill to compete at the varsity intercollegiate level. The factors that most influenced recent decisions to add women’s teams varied by the size of a school’s intercollegiate athletic programs. For example, respondents from NCAA Division I-A schools cited gender equity considerations (82 percent versus 35 percent) and community interest (35 percent versus 12 percent) as a great or very great influence more often than Division III respondents did. Division III schools more often cited the availability of adequate facilities and sufficient equipment (30 percent, compared with 24 percent in Division I-A). Both types of schools cited student interest about as often (60 percent versus 55 percent). For men’s sports, the pattern of which factors most influenced schools’ decisions to add a team was somewhat different, particularly with regard to gender equity goals or requirements. As was the case for the addition of women’s teams, student interest was the factor cited most often (49 percent) as influencing the addition of men’s teams. After student interest, the factor cited most often was the availability of sufficient facilities and adequate equipment (32 percent), followed by community interest (23 percent).
Gender equity considerations, an influential factor for adding a women's team, were cited by only 4 percent of schools that had added a men's team. The level of student interest was the most often cited factor in schools' most recent decisions to discontinue men's and women's teams (see fig. 4). Among the 272 responding schools that discontinued a men's team, 91 (33 percent) cited lack of student interest as a great or very great influence, 83 (31 percent) cited the need to meet gender equity goals or requirements, and 82 (30 percent) cited the need to reallocate budget resources to other sports. Factors affecting decisions to discontinue men's teams varied with the size of a school's program. Among schools with large intercollegiate athletic programs, gender equity considerations more often figured as a great or very great influence. At NCAA Division I-A schools, for example, a majority (54 percent) of the respondents discontinuing a men's team cited gender equity considerations as a great or very great influence. Insufficient student interest in the sport was not often cited; only 6 percent of respondents cited it as a great or very great influence. In contrast, among NCAA Division III respondents, the absence of sufficient student interest in the sport was the most often cited factor (44 percent cited it as a great or very great influence). The need to reallocate resources to other sports was the next most often cited factor (cited by 26 percent), followed by decreases in the budget and gender equity considerations (each cited by 21 percent). Decisions to discontinue a women's team were most often driven by the level of student interest. Of the 123 schools that discontinued one or more women's teams, 58 percent cited the lack of student interest as a great or very great influence. The next most often cited influences were the team's inability to compete at the desired level and the absence of adequate facilities and sufficient equipment. The most recent addition of an intercollegiate team increased the average school's total intercollegiate expenditures by an estimated 6 percent, and the most recent discontinuation of a team reduced expenditures by 4 percent. In general, schools with larger intercollegiate programs experienced smaller percentage changes in their expenditures, as shown in table 6. For example, adding a women's team at the NCAA Division I-A level increased costs an average of 3 percent, compared to 5 percent for NCAA Division III and 9 percent for NAIA. The comparable averages for recent additions of men's teams were 2, 8, and 13 percent. The effect of adding or discontinuing a team also varied by sport (see table 7). For example, schools estimated that adding women's soccer typically increased expenditures by 6 percent, while adding football teams increased expenditures by an average of 31 percent. Discontinuing men's tennis decreased expenditures by an average of 2 percent, while discontinuing football decreased expenditures an average of 24 percent. The 307 responding schools that discontinued a team during the 1992–93 to 1999–2000 period typically spent 3 months or less between making the proposal to discontinue a team and making a final decision. Most schools informed the campus community of the proposed discontinuation before the decision was final. Once the decision was made to discontinue a team, however, most did not provide a written explanation for their decision. Most schools held meetings to discuss the proposal with groups in the campus community.
Schools with larger athletic programs more often included other interested parties, such as alumni or members of booster clubs. Affected athletes usually continued to receive athletic financial aid after the sport was discontinued. Most decisions to discontinue a team were considered and implemented within a few months following the initial proposal, according to the responses from the colleges and universities concerning the team they most recently discontinued during the 1992–93 to 1999–2000 period. The median reported time between making such a proposal and reaching a final decision was 2 months. In 38 percent of cases, both the proposal and the final decision came in the same month. In about 5 percent of cases, the schools took more than a year to reach a final decision. The amount of time before the team stopped participating was also brief. For about one-third of the schools, the team had already stopped participating before the final decision to discontinue the sport was made. For another 26 percent, participation stopped during the month the final decision was made. For another 17 percent of the respondents, participation ended by the third month following the final decision. Only about 5 percent allowed sports teams to continue to play for a year or more past the time when a final decision was made. In most cases the proposal to discontinue the team came from within the athletic department, although college administrations were a common source at schools with smaller athletic programs. About 60 percent said the proposal originated with the athletic department. At NCAA Division I-A schools, the figure was 83 percent. For NAIA and NCAA Division I-AAA schools, about one-third of the proposals originated from the school administration. For example, at NAIA schools that discontinued a sport, the athletic department initiated 46 percent of the proposals and the school administration initiated 38 percent. Similarly, at NCAA Division I-AAA schools that discontinued a sport, athletic departments initiated 50 percent of the proposals and the school administration initiated another 36 percent. Most colleges and universities (186 of the 307 schools discontinuing a team) informed the campus community of the possibility of discontinuing the team before the decision was final. Large schools, such as those in NCAA Division I-A, were most likely to use a press release to inform the campus community of the possibility of discontinuing the sport. NCAA Division III schools more often provided the information by mail to individuals or used other means such as meetings with athletes and staff. Most of the schools discontinuing a team (64 percent) informed affected athletes of the decision in the month it was finalized. About 20 percent of these schools indicated that they informed the affected athletes of the decision during the 3 months preceding the final decision. About 10 percent of these schools indicated that they informed the affected student athletes of a decision in the months following a final decision. Typically these schools informed the athletes within a month or two. Overall, less than half (41 percent) of schools that discontinued a sports team provided a written explanation. This varied somewhat by the size of schools' athletic programs. NCAA Division I-A and I-AA schools were least likely to provide a written explanation to affected athletes; about one-quarter of them did so.
Members of NCAA Divisions II and III and NAIA were more likely to provide a written explanation; about half did so. More than two-thirds of the schools that discontinued intercollegiate athletic teams did so without allowing an appeal of the decision. The proportion of schools allowing an appeal varied little by size of schools' athletic programs—from a low of 25 percent among Division I-AA schools to a high of 36 percent among Division I-AAA schools. Several schools described their appeals process as a meeting with the athletic director. Most schools, however, described appeals as meetings with school administrators or organizational units outside the athletics department. For example, schools allowed student-athletes to appeal to the dean of students, the athletic council, the school's president, or the board of trustees. One respondent described an appeal involving an open forum at which all interested parties could speak; others provided opportunities for a written appeal. About 80 percent of the schools that discontinued a team and could award athletic financial aid (170 of the 212 responding schools; NCAA Division III schools are prohibited from providing such aid) indicated that they allowed their student-athletes to continue receiving aid even though the team was being discontinued. This was most often the case at NCAA Division I schools, where continued aid was available at 90 percent of schools. It was less often the case at Division II schools, where 72 percent indicated that student-athletes could continue to receive aid. For about 86 percent of the schools that continued to provide assistance, the aid was available until the athlete graduated. At most of the rest, the aid was made available for up to 1 year. Among all NCAA and NAIA schools discontinuing a team, 86 percent assisted affected athletes in transferring to another institution's intercollegiate athletics program. However, affected athletes who remained enrolled at the school did not necessarily have the opportunity to compete in that sport at the club level; only 41 percent of schools gave the affected athletes that opportunity. A majority of the 1,191 school officials reported that they have been able to add one or more teams without discontinuing others. They used a variety of strategies to do so, including obtaining funding from nonschool sources and finding ways to contain costs. The four schools we reviewed in depth used strategies that ranged from fundraising to awarding fewer scholarships. The 693 schools that added one or more intercollegiate athletic teams over the 1992–93 to 1999–2000 period without discontinuing a team did so more often by obtaining additional revenue than by containing costs and reallocating revenue. Sources of funds tended to vary with the size of the intercollegiate athletic program. As shown in table 8, NCAA Division I-A schools tended to rely on revenue from other sports and from outside sources. Schools with smaller programs, particularly those in NCAA Division III and NAIA, most often used additional funds from the institution's general fund. In some cases, they reallocated existing resources by, for example, trimming travel expenses for all teams and using the savings to help fund the new team. For more detailed information concerning how schools added teams without discontinuing opportunities for athletes on other teams, we visited four colleges and universities to learn how they enhanced their athletic programs.
We selected these four because they represented various sizes of schools and athletic programs, and different regions of the country (see table 9). They used combinations of innovative strategies that, as the survey reported, placed greater emphasis on increasing athletic revenue than on cutting costs in other programs. Fundraising strategies included renting out athletic facilities, and cost-containment approaches included trimming administrative expenditures. Though all four schools have unique characteristics, directors from each athletic program articulated factors that were key to facilitating successful program expansion without discontinuing teams. Table 10 lists these factors. All four schools cited the first three factors and two of the four schools cited the last factor. One of the athletic directors acknowledged, however, that a “one size fits all” approach may not be feasible and that these approaches may not apply to other schools. Athletic directors also identified several specific revenue-generating approaches for adding teams without discontinuing others. Donations. The smaller Division I-A school revitalized a business relationship with the chief executive officer of a local private firm. This individual’s prominence, in turn, encouraged financial support from the rest of the business community. Substantial donations from fans and locally based corporations also enabled the school to add new teams and build facilities such as a new football stadium, a sports complex with a softball field, a track, a soccer field, and a planned Olympic-sized pool. Similarly, at the larger Division I-A school, large donations helped the school to add teams and increase the capacity of its football stadium, build a new basketball and ice hockey arena, and upgrade locker facilities. Rental fees. Another revenue-generating strategy was to rent out athletic facilities for other purposes and use the fees to expand the athletic program. For example, the football stadiums or basketball arenas at the Division I-A schools were used to host cultural and entertainment events such as concerts, or to serve as venues for prominent athletic events such as a World Cup soccer match. In addition, the smaller Division I-A school took advantage of its proximity to a prominent venue by letting the public use the football stadium parking lot to accommodate overflow event parking; the annual proceeds of $200,000 were all allocated to the women’s program. At the Division III school, local high school teams rented the football field for practice and special athletic events. In addition to focusing on raising revenue, one athletic director told us that it was important to maintain flexibility in the use of funds available to the athletic department. For example, the larger Division I-A school’s athletic department requires that any earnings in excess of a specified rate of return on endowment funds designated for specific teams be available for general intercollegiate athletic department uses. This gives the athletic director greater flexibility in allocating resources. All four schools we visited also took various steps to reduce current or avoid incurring additional expenditures. 
These included the following strategies:
Recruiting most prospective student-athletes via telephone rather than in person,
Denying requests for some teams to be elevated from club to varsity status,
Replacing a retiring full-time faculty member with a coach who also assumed other administrative duties,
Limiting the size of the football team roster,
Trimming administrative costs,
Not awarding the maximum number of scholarships allowed, and
Limiting team travel outside the region to one trip every 2 to 3 years to minimize travel expenses.
Another cost-containment strategy involved establishing partnerships between the school and the local community. Such partnerships reflected the schools' ability to capitalize on the unique characteristics of their geographic location. For example, the larger Division I-A school planned to undertake a cost-sharing project with the city and local school district to build a boathouse on a local river that would accommodate rowing teams from the university, high school, and general public. The smaller Division I-A school teamed with a local hospital offering a nationally recognized sports medicine program. Through the arrangement, the hospital provides free services, including a portable medical facility at sports events and physical screenings for each athlete. The Division III school formed a partnership with a locally based professional men's basketball team. Under the agreement, the team was able to practice at the school's basketball courts in exchange for funding a new hardwood floor for the courts and renovations to the men's and women's locker rooms. We provided a draft of this report to the Department of Education for comment, and it did not provide comments. We are sending copies of this report to the Honorable Roderick R. Paige, Secretary of Education; appropriate congressional committees; representatives of NCAA and NAIA; and other interested parties. Please call me at (202) 512-7215 if you or your staff have any questions about this report. Key contacts and staff acknowledgments for this report are listed in appendix II. As agreed with your offices, we focused our review of intercollegiate athletics on addressing the following questions: How did the number of men's and women's intercollegiate sports participants and teams at 4-year colleges and universities change in the 2 decades since the 1981–82 school year? How many colleges and universities added and discontinued teams since the 1992–93 school year, and what influenced their most recent decisions to add and discontinue teams? How did colleges and universities make and implement decisions to discontinue intercollegiate sports? When colleges and universities added teams, what types of strategies did they use to avoid discontinuing sports teams or severely reducing their funding? To determine the number of men's and women's intercollegiate sports participants and teams, we gathered participation statistics from the two largest 4-year intercollegiate athletic associations—the National Association of Intercollegiate Athletics (NAIA) and the National Collegiate Athletic Association (NCAA). Some schools were members of both associations. For example, of the 787 NCAA members and 515 NAIA members in 1981–82, 117 were dual-membership schools. By 1998–99, NCAA had 1,041 members and NAIA had 339 members, 61 of which were dual members as of April 1999, according to the NCAA.
Based on the number of teams and average team sizes, we estimated that these schools accounted for about 3 percent of male and 2 percent of female participants in 1997–98. Because dual-membership schools report their participation statistics to both associations, we counted their statistics only once to avoid double-counting the numbers of teams and participants. The adjusted participation statistics were used to calculate net change in number of teams, number of participants, and participation rates between 1981–82 and 1998–99. To estimate rates of participation, we divided the total estimated number of participants for both associations by the estimated total number of full-time undergraduates enrolled at all 4-year institutions. To the extent that an individual student participated in more than one sport, our calculation of the number of participants may be overstated because these individuals are counted more than once in the statistics. In addition, some 4-year institutions are not members of either NAIA or NCAA, and they were excluded from our analyses. Although we did not verify the accuracy of the statistics provided by the NCAA and NAIA, they are the best available data and are widely used by researchers to study intercollegiate athletic participation. To respond to the other three questions, we developed and administered a questionnaire to gather information from athletic directors at all 4-year colleges and universities that were members of either the NAIA or NCAA. We pretested a draft questionnaire at six schools and subsequently revised it based on their comments. In May 2000, we mailed the final questionnaire to 1,310 institutions, including 326 NAIA members and 1,040 NCAA members (both active and provisional members). This included 56 4-year colleges and universities that were members of both NCAA and NAIA. By October 2000, we had received 1,191 usable questionnaire responses, for an overall response rate of 91 percent. In some cases, however, respondents did not respond to all applicable questions. The questionnaire asked athletic directors for the total number of women's and men's intercollegiate sports teams added and discontinued during the 1992–93 to 1999–2000 school-year period. When calculating the number of new teams added, we excluded teams that had not yet begun participating in intercollegiate competition by the end of the 1999–2000 school year. Similarly, when calculating the number of teams discontinued, we excluded teams whose last day of intercollegiate competition was after the end of the 1999–2000 school year. We asked each school that added or discontinued a team to respond to additional questions concerning only the most recently added and most recently discontinued men's and women's sports teams. We reviewed athletic directors' questionnaire responses for consistency and in many cases contacted them or their staff to resolve inconsistencies, but we did not otherwise verify the information provided in the questionnaire responses. To identify types of strategies that colleges and universities used to avoid discontinuing sports teams or severely reducing their funding, we used the questionnaire to collect information on how schools paid for new teams. We analyzed these responses for schools that had added some teams without discontinuing others.
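The double-counting adjustment and the participation-rate calculation described in this appendix amount to simple arithmetic on the two associations' totals. The following is a minimal sketch of that calculation; every figure in it is a hypothetical placeholder, not one of the NCAA/NAIA statistics reported above.

```python
# Hypothetical association totals for one school year; the actual NCAA and
# NAIA statistics appear in the report's tables.
ncaa = {"participants": 150_000, "teams": 6_000}
naia = {"participants": 40_000, "teams": 2_000}
dual = {"participants": 8_000, "teams": 400}  # schools that report to both

# Dual-membership schools report to both associations, so their statistics
# are subtracted once to avoid double-counting teams and participants.
total_participants = ncaa["participants"] + naia["participants"] - dual["participants"]
total_teams = ncaa["teams"] + naia["teams"] - dual["teams"]

# Participation rate: total participants divided by estimated full-time
# undergraduate enrollment at all 4-year institutions (also hypothetical here).
# Students who play more than one sport are still counted once per sport,
# so the rate can be somewhat overstated, as the report notes.
fulltime_undergrads = 3_300_000
rate = total_participants / fulltime_undergrads

print(f"{total_participants:,} participants on {total_teams:,} teams; "
      f"participation rate {rate:.1%}")
```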
To get some specific examples of how schools augmented their athletic program without eliminating teams or severely reducing their funding, we visited four selected colleges and universities that were NCAA member schools. We chose these schools in order to achieve variation in a number of characteristics, including geographic diversity, whether the school was public or private, size of the athletic department budget, whether the school awarded athletic scholarships, whether sports were profitable, and whether the school sponsored football. At each school, we interviewed the athletic director and other staff involved in administering the athletic program and toured the athletic facilities. In addition to the individuals named above, Joel I. Grossman, Elsie M. Picyk, Meeta Sharma, Sharon M. Silas, Stanley G. Stenersen, Jason M. Suzaka, and James P. Wright made key contributions to this report.
The number of women participating in intercollegiate athletics at 4-year colleges and universities increased substantially between school years 1981–82 and 1998–99, while the number of men participating increased more modestly. The total number of women's teams increased by 3,784, compared to an increase of 36 men's teams. In all, 963 schools added teams and 307 discontinued teams since 1992–93. The two factors cited most often as greatly influencing the decision to add or discontinue teams were the need to address student interest in particular sports and the need to meet gender equity goals and requirements. Schools that discontinued men's teams also frequently cited the need to reallocate the athletic budget to other sports. Colleges and universities that discontinued a team typically took 3 months or less between originating the proposal and making the final decision. Most schools informed members of the campus community of the possibility that the team would be discontinued, and most held meetings with campus groups before making the final decision. Most schools offered to help affected athletes transfer to other schools, and students receiving athletics-related financial aid continued to receive it for at least some period after the team was disbanded. Schools that were able to add one or more teams without discontinuing others used various strategies to increase athletic program revenue and contain costs. Some schools relied on the institution's general fund, while others used private sources and athletic facility rental fees.
The ISS program began in 1993 with several partner countries: Canada, the 11 member nations of the European Space Agency, Japan, and Russia. From 1994 through 2010, NASA estimates that it directly invested over $48 billion in development and construction of the on-orbit scientific laboratory, the ISS. NASA intended ISS assembly to be complete much sooner than it was. For example, in 1995, NASA expected ISS assembly to be finished by June 2002, whereas the agency actually completed assembly in 2010. With the ISS expected to be in use only through 2015, this slower pace shortened the amount of time NASA had available to take advantage of the significant monetary investment and to fully utilize the station. As a result, the NASA Authorization Act of 2010 required the NASA Administrator to take all actions necessary to ensure the safe and effective operation of the ISS through at least September 30, 2020. The ISS is the largest orbiting man-made object (see fig. 1). It is composed of about 1 million pounds of hardware, brought to orbit over the course of a decade. The ISS includes (1) primary structures, that is, the external trusses which serve as the backbone of the station and the pressurized modules that are occupied by the ISS crew, and (2) functional systems made up of replaceable units, that is, systems that provide basic functionality such as life support and electrical power and that are made of modular components that are replaceable by astronauts on orbit. The ISS was constructed to support three activities: scientific research, technology development, and development of industrial applications. The facilities aboard the ISS allow for ongoing research in microgravity, studies of other aspects of the space environment, tests of new technology, and long-term space operations. The facilities also enable a permanent crew of up to six astronauts to maintain their physical health standards while conducting many different types of research, including experiments in biotechnology, combustion science, fluid physics, and materials science, on behalf of ground-based researchers. Furthermore, the ISS has the capability to support research on materials and other technologies to see how they react in the space environment. NASA planned for the space shuttle to serve as the means of transporting crew, hardware, and supplies to the ISS through the end of the station's life. However, in 2004, President George W. Bush announced his Vision for Space Exploration (Vision) that included direction for NASA to develop new spaceflight systems under the Constellation program to replace the space shuttle as NASA's primary spaceflight system. The Vision also included provisions for NASA to pursue commercial alternatives for providing transportation and other services to support the ISS after 2010. NASA established the Commercial Crew and Cargo Program in 2005 to facilitate the private demonstration of safe, reliable, and cost-effective transportation services and to purchase these services commercially. When the Constellation program was cancelled in 2010, the commercial vehicles became NASA's primary focus for providing cargo and crew transportation to the ISS. The success of commercial efforts became even more important in 2010, when Congress authorized the extension of space station operations from 2015 until at least 2020 and the President directed that NASA transition the role of human transportation to low-Earth orbit to commercial space companies.
The greatest challenge facing NASA is transporting cargo and crew to and from the ISS to make effective use of the ISS. NASA plans to rely on ISS international partner and new commercial launch vehicles to transport cargo and crew to and from the ISS until at least 2020. NASA hopes to begin using new commercial cargo vehicles in 2012 and crew vehicles to transport astronauts to and from the ISS beginning in 2017. NASA's decision to rely on the new commercial vehicles is inherently risky because the vehicles are still in development and not yet proven or fully operational.

NASA is relying on 51 flights of international partner and commercial vehicles to transport cargo to the ISS from 2012 through 2020, but agreements for international flights after 2016 are not in place and the commercial vehicles are unproven. NASA has agreements in place with the European and Japanese space consortiums for their respective vehicles—the European Automated Transfer Vehicle (ATV) and the Japanese H-II Transfer Vehicle (HTV)—to conduct cargo resupply missions beginning in 2012 through 2016. The ATV and HTV are unmanned vehicles that have flown to the ISS and carry such items as hardware and water. NASA's current plans anticipate employing a total of 12 international partner launches—8 from 2012 to 2016 and 4 from 2017 through 2020. NASA does not have agreements in place for international partners to provide cargo services to the ISS beyond 2016. NASA plans to use the ATV for a number of cargo flights through 2014, but no longer anticipates its use after that time. NASA plans to use the HTV for a number of cargo flights through 2016, but its negotiations with the Japanese partners for flights beyond 2016 are in their infancy.

NASA also plans to use two types of domestic commercial launch vehicles to maintain the ISS from 2012 through 2020. Development of these vehicles—the Falcon 9 and the Antares—was fostered under a NASA-initiated effort known as Commercial Orbital Transportation Services. (The Antares was previously known as the Taurus II.) These vehicles are being developed by private industry corporations—the Falcon 9 by SpaceX and the Antares by Orbital Sciences Corporation. In late 2008, NASA awarded contracts to both companies to provide cargo transport services to the ISS. Only SpaceX will be able to safely return significant amounts of cargo to earth, such as the results of scientific experiments. NASA anticipates that SpaceX will begin providing that capability in 2012.

Commercial vehicles are essential to sustaining and utilizing the ISS. As table 1 indicates, SpaceX and Orbital are scheduled to fly 20 (71 percent) of the 28 launches NASA plans through 2016, and follow-on commercial resupply vehicles are expected to fly 19 (83 percent) of the 23 launches from 2017 through 2020. This plan relies on commercial vehicles meeting anticipated—not proven—flight rates. As we have previously reported, both SpaceX and Orbital are working under aggressive schedules and have experienced delays in completing demonstrations. SpaceX flew its first demonstration mission in December 2010, some 18 months late, because of such factors as design issues and software development. Currently, SpaceX's next demonstration launch to the ISS has been delayed from November 2011 to late April 2012 because of additional testing and resolution of some technical issues such as electromagnetic interference. Likewise, Orbital experienced programmatic changes and developmental difficulties that led to multiple delays of several months' duration.
In May 2011 testimony, we noted that Orbital's inaugural demonstration mission had been delayed to December 2011. Currently, this flight has been delayed further to August or September 2012, primarily because of issues related to construction and testing of the launch pad at Wallops Island, Virginia. NASA has made efforts to accommodate delays in commercial vehicle development, including use of the final shuttle flight in July 2011 to pre-position additional ISS spares. However, if the commercial vehicle launches do not occur as planned in 2012, the ISS could lose some ability to function and sustain research efforts due to a lack of alternative launch vehicles to support the ISS and return scientific experiments back to earth. If the international partner agreements and commercial service provider contracts do not materialize as NASA plans for the years beyond 2016, this could lead to a potential cargo shortfall. As we reported in 2011, NASA's strategic planning manifests showed that, when anticipated growth in national laboratory demands and margin for unforeseen maintenance needs are accounted for, the 56 flights NASA was planning for at the time of our review might not cover all of NASA's anticipated needs. These shortfalls amounted to a total of 2.3 metric tons—approximately the cargo that one SpaceX commercial vehicle will be able to transport to the ISS. As of March 2012, NASA has cut its planned number of flights from 2012 through 2020 from the 56 flights we reported to 51 flights. However, its current ongoing analysis is no longer projecting a cargo shortfall even with the decreased number of flights. According to an ISS program official, cargo estimates, particularly beyond 2013, are for planning purposes and could change, as they are updated frequently based on launch vehicle availability and the ISS's need for spares. NASA faces two major challenges in transporting crew to the ISS—adjusting its acquisition strategy for crew vehicles to match available funding and deciding if and when to purchase crew seats on the Russian Soyuz in case domestic commercial crew vehicles are not available as planned in 2017. In 2010, President Obama directed NASA to transition the role of transporting humans to low-Earth orbit to commercial space companies. Consequently, in 2010 and 2011 NASA entered into funded and unfunded Space Act agreements with several companies to develop and test key technologies and subsystems to further commercial development of crew transportation services. NASA's intent was to encourage private sector innovation and to procure safe, reliable transportation services to the space station at a reasonable price. Under this acquisition approach, NASA plans to procure seats for crew transportation to the ISS from the private sector through at least 2020. In 2011, we reviewed NASA's plans for contracting for additional commercial crew development efforts and found that the agency's approach employed several good acquisition practices, including competitive contracting, that—if implemented effectively—limit the government's risk. As we also noted in that report, NASA's funding level for fiscal year 2012 is almost 50 percent less than it anticipated when it developed its approach for procuring commercial crew services. Given this funding level, NASA indicated it could not award contracts to multiple providers, which weakened prospects for competition in subsequent phases of the program.
The main premise of its procurement approach to control costs—full and open competition for future phases of the program—was therefore likely no longer viable. Without competition, NASA could become dependent on one contractor for developing and providing launch services to the space station. Reliance on a sole source for any product or service increases the risk that the government will pay more than expected, since no competitors exist to help control market prices. As a result of this funding decrease, NASA adjusted its acquisition strategy. The agency now plans to enter into another round of Space Act agreements to further the development of commercial crew vehicles and has delayed the projected purchase of commercial crew transportation until 2017. Additionally, the agency faces another looming challenge—a decision about if and when to purchase crew space on the Russian Soyuz vehicle. NASA will likely need to decide by the end of 2013 whether to purchase additional seats that might be needed beyond 2016 because the lead time for acquiring additional seats on the Soyuz is 3 years. However, in the 2013 time frame, NASA cannot be fully confident that domestic crew efforts will succeed because the vehicles will not yet have entered the test and integration phase of development. Furthermore, the decision to purchase crew seats on the Russian Soyuz is complicated by restrictions found in the Iran, North Korea, and Syria Nonproliferation Act. These restrictions prohibit NASA from making certain payments to Russia in connection with the ISS unless the President makes a determination. NASA currently has a statutory exemption from this restriction that allows certain types of payments, but that exemption expires in 2016. According to NASA officials, the agency has begun working toward resolution of this problem, but the issue is not yet resolved. NASA's greatest challenge to utilizing the ISS for its intended purpose—scientific research—is inextricably linked with the agency's ability to carry scientific experiments and payloads to and from the ISS. International partner vehicles have much less cargo capacity than the space shuttle did to carry supplies to the ISS and no ability to return research payloads back to earth. The Russian Soyuz vehicle has some ability to transport research payloads back to earth, but the capability is minimal at only 132 pounds. As mentioned previously, however, SpaceX will provide NASA with the capability to transport research payloads back to earth. Consequently, if the new commercial launch vehicles are not available as planned, the impact on ISS utilization could be dramatic. In the past, NASA officials have told us that the impact of failures or significant delays in developing the commercial cargo capability would be similar to the post-Columbia shuttle disaster scenario, where NASA operated the ISS in a "survival mode" and moved to a two-person crew, paused assembly activities, and operated the ISS at a lower altitude to relieve propellant burden. NASA officials stated that if the commercial cargo vehicles are delayed, they would pursue a course of "graceful degradation" of the ISS until conditions improve. In such conditions, the ISS would only conduct minimal science experiments. Nonetheless, NASA expects scientific utilization to increase now that construction of the ISS is complete. The ISS has been continuously staffed since 2000 and now has a six-member crew.
The primary objective for the ISS through 2011 was construction, so research utilization was not the priority. Some research was conducted as time and resources permitted while the crew on board performed assembly tasks. NASA projects that it will utilize approximately 50 percent of the U.S. ISS research facilities for its own research. As we reported in 2009, however, NASA's scientific utilization of the ISS is constrained by limited crew time. Limiting factors include the size of the crew on board the station; the necessary division of crew work among many activities that include maintenance, operations, and research; and the need to share research facilities with international partners. Per statutory direction, NASA has opened the remaining facilities to other federal government entities and private industry and is operating the ISS as a national laboratory. As we reported in 2009 (GAO-10-9), NASA may face challenges in the management and operation of ISS National Laboratory research. There is currently no direct analogue to the ISS National Laboratory, and though NASA currently manages research programs at the Jet Propulsion Laboratory and its other centers that it believes possess similar characteristics to other national laboratories, NASA has limited experience managing the type of diverse scientific research and technology demonstration portfolio that the ISS could eventually represent. In 2011, consistent with a recommendation in that report, NASA selected the Center for the Advancement of Science in Space (CASIS) to centrally manage ISS national laboratory research. Since the establishment of CASIS as the management body of ISS research is relatively recent, we have not examined its effectiveness; therefore, it is too early for us to say whether it will be successful in ensuring full scientific utilization of the station as a national laboratory. We recently reported that NASA has an appropriate and reasonable approach in place to determine the spares needed for the ISS as well as to assess ISS structural health and safety. Estimating ISS spares and gauging the structural health and safety of the ISS are not simple challenges. Among the many factors to be assessed are the reliability of key components, NASA's ability to deliver spares to the ISS, the projected life of structures that cannot be replaced, and in-depth analysis of those components and systems that affect safety. While some empirical data exist, because the ISS is a unique facility in space, assessing its extended life necessarily requires the use of sophisticated analytical techniques and judgments. NASA's approach to determining necessary spare parts for the ISS relies on a statistical process. The statistical process and methodology being used to determine the expected lifetimes of replacement units is a sound and commonly accepted approach within the risk assessment community that considers both manufacturers' predictions and the systems' actual performance. NASA also has a reasonable process for establishing performance goals for various functions necessary for utilization and determining through modeling whether available spares are sufficient to meet goals through 2020, but the rationale for establishing performance goals has not been systematically documented. NASA is also using reasonable analytical tools to assess structural health and determine whether ISS hardware can operate safely through 2020. NASA currently anticipates that—with some mitigation—the ISS will remain structurally sound for continued operations through 2020.
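This statement does not spell out the underlying mathematics, but spares-sufficiency analyses of the general kind described above commonly model failures of replaceable units as a Poisson process, with the failure rate informed by both the manufacturer's prediction and observed on-orbit performance. The sketch below illustrates that generic approach under those assumptions; the unit, the rates, and the equal-weight blending rule are hypothetical stand-ins, not NASA's actual model.

```python
import math

def prob_spares_sufficient(failure_rate_per_year: float, years: float, spares: int) -> float:
    """Probability that failures over the horizon do not exceed the spares on
    hand, assuming failures follow a Poisson process at the given rate."""
    lam = failure_rate_per_year * years
    return sum(math.exp(-lam) * lam**k / math.factorial(k) for k in range(spares + 1))

# Hypothetical replaceable unit: the manufacturer predicts one failure per
# 10 unit-years; on-orbit experience shows 2 failures in 30 unit-years.
predicted_rate = 1 / 10.0
observed_rate = 2 / 30.0

# A simple equal-weight blend stands in for whatever weighting NASA applies
# when it considers both manufacturers' predictions and actual performance.
blended_rate = 0.5 * (predicted_rate + observed_rate)

# Chance that 2 on-orbit spares cover operations from 2012 through 2020.
print(f"{prob_spares_sufficient(blended_rate, years=8, spares=2):.1%}")
```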
NASA also is using reasonable methodologies to identify replacement units and other hardware that could cause serious damage to the ISS if they were to fail. Through 2015, NASA plans to develop methods to mitigate the issues identified and expects to begin implementing corrective actions as plans are put in place. In summary, although NASA has done a credible job of ensuring that the ISS can last for years to come, the question that remains is whether NASA will be able to service the station and productively use it for science. Routine launch support is essential to both, but the road ahead depends on successfully overcoming several complex challenges, such as technical success, funding, international agreements, and management and oversight of the national laboratory. Finally, if any of these challenges cannot be overcome, it will be incumbent upon NASA to ensure that all alternatives are explored—in a timely manner—to make full use of the nation's significant investment in the ISS. Chairman Hall, Ranking Member Johnson, and Members of the Committee, this concludes my prepared statement. I would be pleased to respond to any questions that you may have at this time. For questions about this statement, please contact me at (202) 512-4841 or chaplainc@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. Individuals making key contributions to this statement include Shelby S. Oakley, Assistant Director; John Warren; Tana Davis; and Alyssa Weir. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Construction of the International Space Station (ISS) required dedication and effort on the part of many nations to be successful. Further, the funding necessary to accomplish this task was significant, with the United States alone directly investing nearly $50 billion in its development. Now that construction of the on-orbit laboratory is complete, it is time for the United States and its partners to make use of this investment, and Congress recently took steps to extend the life of the ISS until at least 2020. GAO has cautioned for years that NASA should ensure it has a capability to access and utilize the space station following retirement of the space shuttle in 2011. We have highlighted the challenges associated with transporting cargo and crew to and from the ISS, as well as the difficulties NASA faces in ensuring the ISS supports its purpose of scientific research and in safely operating the station. Some risks have been realized. For example, commercial vehicles are significantly behind schedule—with the first launch to the space station planned for 2012. GAO's statement today will focus on the progress NASA has made and the challenges the agency faces in accessing, ensuring full utilization of, and sustaining the ISS. To prepare this statement, GAO relied on prior relevant work on the ISS and NASA's commercial cargo and crew efforts and conducted a limited amount of additional work to update planned flight information. NASA plans to use international partner and new domestic commercial launch vehicles to access, utilize, and sustain the International Space Station from 2012 through 2020. However, the agency faces challenges in transporting cargo and crew to the ISS as well as ensuring the station is fully utilized. NASA's decision to rely on the new commercial vehicles to transport cargo starting in 2012 and to transport crew starting in 2017 is inherently risky because the vehicles are not yet proven and are experiencing delays in development. Further, NASA does not have agreements in place for international partners to provide cargo services to the ISS beyond 2016. The agency will also face a decision regarding the need to purchase additional seats on the Russian Soyuz vehicle beyond 2016, likely before commercial vehicles have made significant progress in development, given the 3-year lead time necessary for acquiring a seat. This decision is further complicated because restrictions prohibit NASA from making certain payments to Russia in connection with the ISS unless the President makes a determination. Further, NASA currently expects to transport all cargo needed by the ISS in 51 flights through 2020, but if international partner agreements and commercial service contracts do not materialize as the agency plans for the years beyond 2016, the situation could lead to a potential cargo shortfall. If NASA can access the station, it will next be challenged with fully utilizing the ISS national laboratory for its intended purpose—scientific research. To take steps to meet this challenge, and consistent with a 2009 GAO recommendation, in 2011 NASA selected an organization to centrally oversee ISS national laboratory research decision-making. It is too soon, however, to determine whether this organization is ensuring full scientific utilization of the ISS. Regardless of the efforts of the management body, as GAO noted in a 2009 report, constraints on crew time for conducting science could also impact full utilization.
If NASA can overcome its challenges related to accessing the station, it has reasonable approaches in place for estimating spare parts and assessing the structural health of the space station. These approaches provide NASA with increased assurance that the agency will have sufficient spares and will put mitigations in place to effectively and safely utilize the space station.
Distance education is not a new concept, but in recent years, it has assumed markedly new forms and greater prominence. Distance education's older form was the correspondence course—a home study course generally completed by mail. More recently, distance education has increasingly been delivered in electronic forms, such as videoconferencing and the Internet. Some of these newer forms share more features of traditional classroom instruction. For example, students taking a course by videoconference generally participate in an actual class in which they can interact directly with the instructor. Many postsecondary schools have added or expanded electronically based programs, so that distance education is now relatively common across the entire postsecondary landscape. We estimate that in the 1999–2000 school year, about 1.5 million of the 19 million students involved in postsecondary education took at least one electronically transmitted distance education course. Education reports that an estimated 84 percent of 4-year institutions will offer distance education courses in 2002. While newer forms of distance education may incorporate more elements of traditional classroom education than before, they can still differ from a traditional educational experience in many ways. For example, Internet-based distance education, in which coursework is provided through computer hookup, may substitute a computer screen for face-to-face interaction between student and instructor. Chat rooms, bulletin boards, and e-mail become common forms of interaction. Support services, such as counseling, tutoring, and library services, may also be provided without any face-to-face contact. As the largest provider of student financial aid to postsecondary students (an estimated $52 billion in fiscal year 2002), the federal government has a substantial interest in the quality of distance education. Under Title IV of the HEA, the federal government provides grants, work-study wages, and student loans to millions of students each year. For the most part, students taking distance education courses can qualify for this aid in the same way as students taking traditional courses. Differences between distance education and traditional education pose challenges for federal student aid policies and programs. For example, in 1992, the Congress added requirements to the HEA to deal with problems of fraud and abuse at correspondence schools—the primary providers of distance education in the early 1990s. These requirements placed limitations on the use of federal student aid at these schools due to poor quality programs and high default rates on student loans. Such problems demonstrate why it is important to monitor the outcomes of such forms of course delivery. In monitoring such courses, the federal government has mainly relied on the work of accrediting agencies established specifically for providing outside reviews of an institution's educational programs. Our analysis of the NPSAS showed that the estimated 1.5 million postsecondary students who have taken distance education courses have different demographic characteristics when compared with the characteristics of postsecondary students who did not enroll in distance education. These differences included the following. Distance education students are older. As figure 1 demonstrates, students who took all their courses through distance education tended to be older, on average, when compared to other students. Distance education students are more likely to be married.
Figure 2 shows that graduate and undergraduate students who took all of their courses through distance education are more likely to be married than those taking no distance education courses. Undergraduates taking distance education courses are more likely to be female. Women represented about 65 percent of the undergraduate students who took all their courses through distance education. In contrast, they represented about 56 percent of undergraduates who did not take a distance education course. For graduate students, there was no significant difference in the gender of students who took distance education courses and those who did not. Distance education students are more likely to work full-time. As figure 3 shows, a higher percentage of distance education students work full-time when compared to students who did not take any distance education courses. This difference was greatest among graduate students, where about 85 percent of the students who took all of their courses through distance education worked full-time compared to 51 percent of students who did not take any distance education courses. Distance education students are more likely to be part-time students. As might be expected, distance education students tend to go to school on a part-time basis. For undergraduates, about 63 percent of the students who took all their courses through distance education were part-time students while about 47 percent of the students who did not take any distance education courses were part-time students. This trend also occurred among graduate students (about 79 percent of those who took their entire program through distance education were part-time students compared with about 54 percent of those who did not take any distance education courses). Distance education students have higher average incomes. Figure 4 shows that in general, graduate students who took distance education courses tended to have higher average incomes than students who did not take any distance education courses. We found similar patterns for undergraduate students. In addition to the demographic characteristics of distance education students, NPSAS provides certain insights on the characteristics of institutions that offer distance education programs. Among other things, it provides data on the modes of delivery that institutions used to provide distance education and the types of institutions that offered distance education. Public institutions enrolled the most distance education students. For undergraduates, public institutions enrolled more distance education students than either private non-profit or proprietary institutions. Of undergraduates who took at least one distance education class, about 85 percent did so at a public institution (about 79 percent of all undergraduates attended public institutions), about 12 percent did so at private non-profit institutions (about 16 percent of all undergraduates attended private non-profit institutions), and about 3 percent did so at proprietary schools (about 5 percent of all undergraduates attended proprietary schools). For graduate students, public institutions also enrolled more distance education students—about 63.5 percent—than private non-profit or proprietary schools (32 and 4.5 percent, respectively). About 58 percent, 40 percent, and 2 percent of all graduate students attended public institutions, private non-profit institutions, and proprietary schools, respectively. Institutions used the Internet more than any other mode to deliver distance education.
At the three main types of institutions (public, private non-profit, and proprietary), more than half of the undergraduate students who took at least one distance education course did so over the Internet. Over 58 percent of undergraduate distance education students at public institutions used the Internet, and over 70 percent of undergraduate distance education students at private non-profit and proprietary schools also used the Internet. Institutions that offered graduate programs also used the Internet as the primary means of delivering distance education courses. For graduate students who took at least one distance education class, 65 percent of students at public institutions used the Internet, compared with about 69 percent of students at private non-profit institutions and about 94 percent of students at proprietary institutions. Institutions enrolled the most distance education students in subjects related to business, humanities, and education. For undergraduates, about 21 percent of students who took their entire program through distance education studied business and 13 percent studied courses related to the humanities. This is similar to patterns of students who did not take any distance education classes (about 18 percent studied business and about 15 percent studied humanities). For graduate students, about 24 percent of students who took their entire program through distance education enrolled in courses related to education and about 19 percent studied business. Again, this is similar to patterns of graduate students who did not take any distance education classes (about 23 percent studied education and about 17 percent studied business). Federal student aid is an important consideration for many students who take distance education courses, although not to the same degree as students in more traditional classroom settings. Students who took their entire program through distance education applied for student aid at a lower rate than students who did not take any distance education courses (about 40 percent compared with about 50 percent), and fewer also received federal aid (about 31 percent compared with about 39 percent). Nonetheless, even these lower percentages for distance education represent a substantial federal commitment. A number of issues related to distance education and the federal student aid program have surfaced and will likely receive attention when the Congress considers reauthorization of the HEA or when Education examines regulations related to distance education. Among them are the following: "Fifty percent" rule limits aid to correspondence and telecommunications students in certain circumstances. One limitation in the HEA—called the "50 percent rule"—involves students who attend institutions that provide half or more of their coursework through correspondence or telecommunications classes or who have half or more of their students enrolled in such classes. When institutions exceed the 50 percent threshold, their students become ineligible to receive funds from federal student aid programs. As distance education becomes more widespread, more institutions may lose their eligibility. Our initial work indicates about 20 out of over 6,000 Title IV-eligible institutions may face this problem soon or have already exceeded the 50 percent threshold.
Without some relief, the students who attend these institutions may become ineligible for student aid from the federal government in the future. As an example, one institution we visited already offers more than half its courses through distance education; however, it remains eligible for the student aid program because it has received a waiver through Education's Distance Education Demonstration Program. Without a change in the statute or a continuation of the waiver, more than 900 of its students will not be eligible for student aid from the federal government in the future. To deal with this issue, the House passed the Internet Equity and Education Act of 2001 (H.R. 1992) in October 2001. The House proposal would allow a school to obtain a waiver of the 50 percent rule if it (1) is already participating in the federal student loan program, (2) has a default rate of less than 10 percent for each of the last three years for which data are available, and (3) has notified the Secretary of Education of its election to qualify for such an exemption and has not been notified by the Secretary that such election would pose a significant risk to federal funds and the integrity of Title IV programs. The Senate is considering this proposal. Federal student aid policies treat living expenses differently for some distance education students. Currently, students living off campus who are enrolled in traditional classes or students enrolled in telecommunications classes at least half-time can receive an annual living allowance for room and board costs of at least $1,500 and $2,500, respectively. Distance learners enrolled in correspondence classes are not allowed the same allowance. Whether to continue to treat these distance education students differently for purposes of federal student aid is an open policy question. Regulations relating to "seat time" are difficult to apply to some distance education courses. Institutions offering distance education courses that are not tied to standard course lengths such as semesters or quarters have expressed difficulty in interpreting and applying Education's "seat time" rules, which govern how much instructional time must be provided in order for participants to qualify for federal aid. In particular, a rule called the "12-hour rule" has become increasingly difficult to implement. This rule was put in place to curb abuses by schools that would stretch the length of their educational programs without providing any additional instruction time. Schools would do this to maximize the amount of federal aid their students could receive and pass back to the school in the form of tuition and fees. The rule defines each week of instruction in a program that is not a standard course length as 12 hours of instruction, examination, or preparation for examinations. Some distance education courses, particularly self-paced courses, do not necessarily fit this model. Further, the rule also produces significant disparities in the amount of federal aid that students receive for the same amount of academic credit, based simply on whether the program that they are enrolled in uses standard academic terms or not (a simplified illustration follows below). In August 2002, Education proposed replacing the 12-hour rule with a "one-day rule," which would require one day of instruction per week for any course. This rule currently applies to standard term courses, and as proposed, it would cover, among other things, nonstandard term courses. Education plans to publish final regulations that would include this change on or before November 1, 2002.
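To make that disparity concrete, the sketch below is a rough, hypothetical model of the two counting rules, not Education's actual Title IV methodology; the 10-week self-paced schedule, the per-week 12-hour threshold, and the one-day threshold are all assumptions drawn from the rule descriptions above.

```python
# Illustrative only: a simplified model of how the "12-hour rule" and the
# proposed "one-day rule" might count weeks of instructional time for a
# nonstandard-term program. This is NOT Education's actual aid methodology;
# the course schedule below is hypothetical.

def weeks_under_12_hour_rule(weekly_hours):
    # A week counts only if it includes at least 12 hours of instruction,
    # examination, or preparation for examinations.
    return sum(1 for hours in weekly_hours if hours >= 12)

def weeks_under_one_day_rule(weekly_days):
    # A week counts if it includes at least one day of instruction.
    return sum(1 for days in weekly_days if days >= 1)

# Hypothetical 10-week self-paced distance education course that schedules
# 6 hours of instruction spread across 2 days each week.
hours_per_week = [6] * 10
days_per_week = [2] * 10

print(weeks_under_12_hour_rule(hours_per_week))  # 0 countable weeks
print(weeks_under_one_day_rule(days_per_week))   # 10 countable weeks
```

Under these assumptions, none of the course's weeks would count as instructional time under the 12-hour rule, while all ten would count under the proposed one-day rule, which is the kind of disparity institutions have raised.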
Some institutions that might provide nonstandard distance education courses remain concerned, however, because Education has not identified how the "one-day rule" will be interpreted or applied. In considering less restrictive policies that could improve access to higher education, it will be important to recognize that such changes may increase the potential for fraud if adequate management controls are not in place. While our work examining the use of distance education at Minority Serving Institutions (MSIs) is not yet completed, the preliminary data indicate that MSIs—and more specifically, minority students at MSIs—make less use of distance education than students at other schools. NPSAS includes data for a projectable number of students from Historically Black Colleges and Universities and Hispanic Serving Institutions, but it includes only one Tribal College. We plan to send a questionnaire to officials at all three MSI groups to gain a better understanding of their use of distance education technology. In the meantime, however, the available NPSAS data showed the following: Students at Historically Black Colleges and Universities tend to use distance education to a lesser extent than non-MSI students. About 6 percent of undergraduate students at Historically Black Colleges and Universities enrolled in at least one distance education course and about 1.1 percent took their entire program through distance education. These rates are lower than the corresponding rates for students at non-MSIs. Hispanic students attending Hispanic Serving Institutions use distance education at a lower rate than their overall representation in these schools. About 51 percent of the undergraduates at Hispanic Serving Institutions are Hispanic, but they make up only about 40 percent of the undergraduate students enrolled in distance education classes. This difference is statistically significant. Similarly, our analysis shows that the greater the percentage of Hispanic students at an institution, the lower the overall rate of distance education use at that school. Since NPSAS includes data from only one Tribal College, we were unable to develop data on the extent to which Tribal College students use distance education. However, our visits to several Tribal Colleges provide some preliminary insights. Our work shows that distance education may be a viable supplement to classroom education at many Tribal Colleges for a number of reasons. Potential students of many Tribal Colleges live in communities dispersed over large geographic areas—in some cases, students might live over a hundred miles from the nearest Tribal College or satellite campus—making it difficult or impossible for some students to commute to these schools. In such cases, distance education is an appealing way to deliver college courses to remote locations. Additionally, officials at one Tribal College told us that some residents of reservations may be place-bound due to tribal and familial responsibilities; distance education would be one of the few realistic postsecondary education options for this population. Also important, according to officials from some Tribal Colleges we visited, tribal residents have expressed an interest in enrolling in distance education courses.
The HEA focuses on accreditation—a task undertaken by outside agencies—as the main tool for ensuring quality in postsecondary programs, including those offered through distance education. The effectiveness of these accreditation reviews, as well as Education's monitoring of the accreditation process, remains an important issue. To be eligible for federal funds, a postsecondary institution or program must be accredited by an agency recognized by Education as a reliable authority on quality. Education recognizes 58 separate accrediting agencies for this purpose, of which only 38 are recognized for Title IV student aid purposes. The 58 accrediting agencies operate either regionally or nationally, and they accredit a wide variety of institutions and programs, including public and private non-profit two-year and four-year colleges and universities; graduate and professional programs; proprietary vocational and technical training programs; and non-degree training programs. Some accrediting agencies accredit entire institutions, and some accredit specialized programs, departments, or schools that operate within an institution or as single-purpose, freestanding institutions. The HEA and regulations issued by Education establish criteria under which Education will recognize an accreditation agency as a reliable authority regarding the quality of education. The HEA states that accrediting agencies must assess quality in 10 different areas, such as curriculum, student achievement, and program length. Under the HEA, an accrediting agency is required to include distance education programs when assessing quality. In doing so, an accrediting agency must consistently apply and enforce its standards with respect to distance education programs as well as other educational programs at the institution. Our analysis in this area is not as far along as it is for the other topics we are discussing today. We plan to review a number of accreditation efforts to determine the way in which accrediting agencies review distance education programs. We expect that our work will address the following issues: How well accrediting agencies are carrying out their responsibilities for reviewing distance education. The HEA does not contain specific language setting forth how distance learning should be reviewed. Instead, it identifies key areas that accrediting agencies should cover, including student achievement and outcomes, and it relies on accrediting agencies to develop their own standards for how they will review distance education programs. We will look at how accrediting agencies are reviewing distance education programs and the standards that are being used. How well Education is carrying out its responsibilities and whether improvements are needed in Education's policies and procedures for overseeing accrediting agencies. Under the HEA, Education has authority to recognize those agencies it considers to be reliable authorities on the quality of education or training provided. Accrediting agencies have an incentive to seek Education's recognition because without it, students at the institutions they accredit would not be eligible to participate in federal aid programs. We will conduct work to identify what improvements, if any, are needed in Education's oversight of accrediting agencies. In closing, distance education has grown rapidly over the past few years, and our work indicates that distance learning might present new educational opportunities for students.
Congress and the Administration need to ensure that changes to the HEA and regulations do not increase the chances of fraud, waste, or abuse in the student financial aid programs. At the request of this Committee and members of the House Committee on Education and the Workforce, we will continue our study of the issues that we have discussed today. Mr. Chairman, this concludes my testimony. I will be happy to respond to any questions you or other members of the Committee may have.
Increasingly, the issues of distance education and federal student aid intersect. About one in every 13 postsecondary students enrolls in at least one distance education course, and the Department of Education estimates that the number of students involved in distance education has tripled in just 4 years. As the largest provider of financial aid to postsecondary students, the federal government has a considerable interest in distance education. Overall, 1.5 million out of 19 million postsecondary students took at least one distance education course in the 1999-2000 school year. Distance education students differ from other postsecondary students in a number of respects. Compared with other students, they tend to be older and are more likely to be employed full-time while attending school part-time. They also have higher incomes and are more likely to be married. Many students enrolled in distance education courses participate in federal student aid programs. As distance education continues to grow, several major aspects of federal laws, rules, and regulations may need to be reexamined. Certain rules may need to be modified if a small but growing number of schools are to remain eligible for student aid. Students attending these schools may become ineligible for student aid because their distance education programs are growing and may exceed statutory and regulatory limits on the amount of distance education an institution can offer. In general, students at minority serving institutions use distance education less extensively than students at other schools. Accrediting agencies play an important role in reviewing distance education programs. They and Education are "gatekeepers" with respect to ensuring quality at postsecondary institutions, including those that offer distance education programs.
VA operates a national health care system that provides health care services to over 5 million patients annually. As part of that system, VA provides mental health services to veterans in inpatient and outpatient settings in a variety of VA health care facilities, including medical centers, CBOCs, and Vet Centers. Veterans receiving these services include homeless veterans, veterans with serious mental illness, and veterans returning from combat who are dealing with postdeployment readjustment issues. Mental health services are provided for a range of conditions such as depression, PTSD, and substance abuse disorders. VA’s Under Secretary for Health heads VA health care programs and is responsible for oversight of operations in VA’s 21 health care networks, which are structured to manage and allocate resources to more than 150 VA medical centers. Mental health services are provided on an inpatient and outpatient basis in medical centers and may also be provided on an outpatient basis in CBOCs, which are associated with medical centers. Within VA, the lead mental health expert is the Deputy Chief Patient Care Services Officer for Mental Health. This position does not have direct authority for operations, but instead serves as an advisor to VA networks and medical centers on mental health services. In addition, the official in this position is responsible for oversight of the Office of Mental Health Services (OMHS) located at VA headquarters. OMHS includes various clinical experts who provide consultation on mental health services, including PTSD and substance abuse, to VA program officials in the networks and medical centers. VA headquarters allocates most of its medical program services budget each year through a general resource allocation system to its 21 health care networks. This system, the Veterans Equitable Resource Allocation (VERA) system, uses a case-mix formula to allocate funding to the networks, which in turn allocate funding to their medical centers. Although the VERA system is used to allocate funds, it does not designate funds for specific purposes or prescribe how those funds are to be used. Medical centers also receive funding for specific purposes, such as prosthetics, from VA headquarters that is not allocated through the VERA system. In addition, VA medical center resources include collections from insurance reimbursements, copayments, and deductibles for the care of some veterans. In April 2002, President Bush established the President’s New Freedom Commission on Mental Health and directed the Commission to identify policies that could be implemented by federal, state, and local governments to improve the delivery of mental health care across the country. In July 2003, the Commission released its final report and recommendations for improving the American mental health care system. After release of the report, VA’s Under Secretary for Health formed a work group of mental health and health care professionals charged with reviewing the Commission’s recommendations to determine if those recommendations were relevant to VA’s mental health program. Following that effort, in July 2004, VA completed its mental health strategic plan for improving the delivery of mental health services within its health care system. This plan was formally approved by the Secretary of VA in November 2004. 
The mental health strategic plan contained recommended initiatives for improving VA mental health services by addressing a range of issues, including, for example, improving awareness about mental illness and improving access to mental health services. According to VA officials, the mental health strategic plan was designed to address gaps in mental health services provided to veterans across the country. Some of the service gaps identified by VA were in treating veterans with serious mental illness, female veterans, and veterans returning from combat in Iraq and Afghanistan. The implementation of the mental health strategic plan sought to ensure, for example, that mental health services are provided in community-based outpatient settings; that veterans have consistent access to mental health services across the country; and that acute inpatient mental health services are coordinated with other inpatient services provided to veterans. Within VA, OMHS is responsible for coordinating with the networks and medical centers on the overall implementation of the mental health strategic plan. This includes formulating strategies for allocating funds committed for the plan's implementation. Such strategies include, for example, the use of RFPs solicited from networks for specific initiatives to be carried out at their individual medical centers. In addition to making these funding decisions, OMHS is also responsible for tracking the use of funds allocated for implementing the mental health strategic plan. While VA initially attempted to develop an estimate of the cost to fully implement the mental health strategic plan, VA has since decided that a comprehensive cost estimate is inappropriate. According to VA, a full-implementation cost estimate is inappropriate because the plan is a "living document" that will continue to change over time as it is implemented, and thus, the costs will change as well. VA, working with an actuarial firm that used certain assumptions provided by VA, developed both a long-term and a shorter-term "unofficial" estimate of implementation costs for the initiatives included in the plan because VA wished to have a "rough estimate" of what might be entailed in providing all services that might be needed if capacity were not a constraint, according to VA officials. VA and the actuary it used concluded, however, that the methodology used to develop these estimates was problematic. For example, the estimates used incorrect projections for utilization of mental health services, in part, because VA's population and mental health services are different from those in the private sector. VA officials said that more current and accurate data are becoming available for use in projecting the number of Operation Iraqi Freedom (OIF) and Operation Enduring Freedom (OEF) veterans who would be entering the system and need such services, and that such data and improvements in projecting demand were used in development of the President's budget request for fiscal years 2006 and 2007. VA headquarters allocated about $88 million of the $100 million that VA officials said would be allocated for VA mental health strategic plan initiatives in fiscal year 2005, using several approaches. About $53 million was allocated directly to medical centers and certain offices, and $35 million was allocated through VA's general resource allocation system to its health care networks, according to VA officials.
The approximately $12 million remaining of the $100 million was not allocated by any approach, headquarters officials said, because there was not enough time during the fiscal year to allocate the funds. Officials we interviewed at seven medical centers in four networks reported using allocated funds to provide new mental health services and to provide more of existing services. However, some medical center officials reported that they did not use all allocated funds for plan initiatives by the end of the fiscal year, due in part to the length of time it took to hire new staff. VA headquarters allocated about $53 million directly to medical centers and certain offices based on proposals submitted for funding and other approaches targeted to specific initiatives related to the mental health strategic plan in fiscal year 2005. (See table 1.) VA headquarters developed RFPs and, through them, solicited submissions from networks for specific initiatives to be carried out at their individual medical centers. VA allocated resources through this and other targeted approaches to support a range of mental health services, based, in part, on the priorities of VA leadership and legislation for programs related to PTSD, substance abuse, and other mental health areas, according to VA headquarters officials. VA headquarters officials told us that the Secretary of VA had identified several areas of the mental health strategic plan that were to be priorities for implementation, including those related to substance abuse, PTSD, services for veterans of OIF/OEF, mental health in CBOCs, and homelessness. Nearly $20 million of the approximately $53 million allocated by using RFPs and other targeted approaches was for mental health services related to legislation that expressly required spending or authorized such services, according to VA officials. In addition, nearly $33 million was allocated for mental health services not directly related to such legislation. Most of the approximately $53 million allocated—about $48 million—went to VA medical centers. PTSD services and OIF/OEF veterans' mental health care received combined allocations of about $18 million. In addition, combined allocations for Compensated Work Therapy (CWT) totaled nearly $10 million. Other initiatives receiving funding included substance abuse services, domiciliary expansion, and psychosocial rehabilitation for veterans with serious mental illness. In addition, VA allocated $4 million that was initially planned for CWT programs to VA's Office of Geriatrics and Extended Care to support development of a new nursing home care model. This shift occurred toward the end of the fiscal year, when it appeared that not all mental health strategic plan funding would be allocated that year. VA officials noted that the nursing home model was aligned with initiatives in the mental health strategic plan related to the needs of veterans in long-term care settings. The remaining funds—$600,000—were allocated to VA's Employee Education System to develop educational programs. VA headquarters officials issued five RFPs from October 2004 to January 2005 that described the specific types of services for which mental health strategic plan funding was available. The RFPs related to PTSD, veterans of OIF and OEF, substance abuse, and psychosocial rehabilitation services were issued in October 2004; the domiciliary RFP was issued in January 2005. All of the RFPs noted that funding would be provided to address unmet needs or gaps in services.
Review panels headed by mental health experts within VA reviewed the proposals submitted by networks, ranked them, and provided their rankings to VA's leadership, who made the allocation decisions. VA then allocated funding directly to medical centers for the mental health strategic plan initiatives beginning in February 2005 and continuing throughout fiscal year 2005. In addition to RFPs, VA also used other approaches targeted to specific initiatives based on identified needs. For example, VA headquarters officials used a targeted approach to allocate funding to medical centers to expand mental health services at CBOCs that had fewer mental health visits than a standard that VA identified for this purpose. In addition, VA headquarters allocated funds to support the creation of CWT-supported employment mentor sites in each network. The medical centers selected as mentor sites were expected to provide training and support for existing and future CWT programs aimed at helping veterans with serious mental illness find and maintain employment. VA headquarters also used targeted funding approaches to allocate funds to medical centers to enhance existing CWT programs through the addition of new staff and to establish CWT programs at medical centers without such programs. VA headquarters used targeted approaches to allocate funding for new and expanded mental health intensive case management teams; grant and per diem liaisons for homeless veterans; and PTSD, OIF/OEF veterans', and substance abuse services. VA headquarters officials said that allocations made for initiatives in fiscal year 2005 through RFPs and other approaches targeted to specific initiatives would be made for a total of 2 to 3 fiscal years. These officials said they anticipated that medical centers would hire permanent staff whose positions would need to be funded for more than 1 year. The expectation of VA leadership was that after funds allocated through these approaches were no longer available, medical centers would continue to support these programs using their general operating funds received through VA's general resource allocation system. VA allocated $35 million for mental health strategic plan initiatives in fiscal year 2005 through its general resource allocation system to its health care networks, according to VA headquarters officials. The decision to allocate these resources to VA's networks for mental health strategic plan initiatives was retrospective, and VA did not notify networks and medical centers of this decision. Although VA headquarters made fiscal year 2005 general resource allocations to the networks in December 2004, the decision that $35 million of the funds allocated at that time was for mental health strategic plan initiatives was not finalized until April 2005, several months after the general allocation had been made. VA headquarters officials said that they made the decision to allocate $35 million from the general resource allocation system because these resources would be allocated more rapidly than if they had been allocated through RFPs. However, other VA headquarters officials told us that the decision was also made, in part, because VA did not have sufficient unallocated funds remaining after the December 2004 general allocation to fund $100 million for the mental health strategic plan through RFPs and other targeted approaches.
VA headquarters officials, as well as network and medical center officials, indicated that there was no guidance to the networks and medical centers instructing them to use specific amounts from their general fiscal year allocation for mental health strategic plan initiatives. Network and medical center officials we spoke with in four networks were unaware that any specific portion of their general allocation was intended by headquarters officials to be used for mental health strategic plan initiatives. Several VA medical center officials noted, however, that some of the funds in their general allocation were used to support mental health programs generally, as part of their routine operations. Because network and medical center officials we interviewed did not know that funds had been allocated for mental health strategic plan initiatives through VA's general resource allocation system, and because VA headquarters did not notify networks and medical centers throughout VA of this retrospective allocation, it is likely that some of these funds were not used for plan initiatives. VA did not allocate the approximately $12 million remaining of the $100 million planned for mental health strategic plan initiatives in fiscal year 2005 because, according to VA headquarters officials, there was not enough time during the fiscal year to allocate the funds through the RFP process or other approaches targeted to specific initiatives. In addition, officials said that when resources were allocated later in the fiscal year through an RFP, rather than at the beginning of the year, the amount allocated was only a portion of the annualized cost. For example, if funds for a project with an annual cost of $4 million were allocated midway through the fiscal year, only half the annual cost was allocated at that time—$2 million. The expectation was that the full $4 million would be made available for the project over the 12 months of the next fiscal year. The approximately $12 million in unallocated funds in fiscal year 2005 was intended for mental health strategic plan initiatives based on an allocation plan developed by VA. (See table 2.) About $11 million of the resources not allocated was for services related to legislation that expressly required spending or authorized such services, according to VA officials. VA headquarters officials said that the funds not allocated for mental health strategic plan initiatives were allocated for other health care services. Officials we interviewed from seven medical centers in four networks reported using the funds allocated to them for mental health strategic plan initiatives through RFPs and other targeted approaches, but some officials said that some of these funds were not used for plan initiatives in fiscal year 2005. Officials said they used allocated funds to provide new mental health services and to expand existing mental health services included in plan initiatives. For example, officials at medical centers in Bay Pines and the Tennessee Valley Healthcare System reported using funds to increase the number of mental health providers at CBOCs, some of which previously had no mental health providers available to see veterans. The Albuquerque medical center used funds to develop a CWT-supported employment program to help veterans with mental health diagnoses develop job skills and find employment.
The Tennessee Valley Healthcare System also implemented a new 6-week PTSD day treatment program in which veterans live in the community but come to the medical center during the day for counseling, group therapy, and other services. The Tampa medical center funded new mental health staff to work with veterans being treated in its Polytrauma Rehabilitation Center. The Tuscaloosa medical center opened a new domiciliary for homeless veterans, and the Phoenix medical center hired a new grant and per diem liaison for its homeless program. The medical centers in our review used the mental health strategic plan funds for recurring purposes, such as hiring staff, and for nonrecurring purposes, including the acquisition of furniture and equipment as well as building renovation. Officials at four medical centers reported that they were not able to use all of their fiscal year 2005 funding by the end of the fiscal year as planned and cited several factors that contributed to this situation. Officials cited the length of time it takes to recruit new staff in general and the particular difficulty of hiring specialized staff such as psychiatrists. Officials at two medical centers noted that they received funding for multiple new positions but that it was difficult for the medical center to recruit and hire for so many positions in a relatively short period of time. In addition, in some cases the need to locate or renovate space for mental health programs contributed to delays in using funds. For example, officials at the Albuquerque medical center reported that although it received funding for staff for a new residential program, it took some time to renovate the space needed for that program, which limited the amount of funding for staff they were actually able to spend in fiscal year 2005. Medical centers varied in how they treated fiscal year 2005 funds that were allocated by VA for mental health strategic plan initiatives but not used for those initiatives. Officials at three medical centers reported that they carried over the funds for use in the next fiscal year. For example, officials at the Phoenix medical center reported carrying over unused funding for a substance abuse residential rehabilitation program. Officials at two medical centers reported that they used these funds for other health care purposes. For example, officials at the Albuquerque medical center said that funding that was not used for staffing due to difficulties with hiring was made available to meet other needs in the medical center for that fiscal year. Officials at another medical center, the Tennessee Valley Healthcare System, reported having unused fiscal year 2005 funding due to difficulties with hiring and using this funding to support other mental health programs, in particular to hire mental health staff for its CBOCs. In a weekly conference call in August 2005, VA headquarters officials advised participants from networks and medical centers that if they were unable to hire staff for initiatives in fiscal year 2005, they should use the allocated funds only for mental health services. VA headquarters allocated about $158 million of the $200 million to be used for VA mental health strategic plan initiatives in fiscal year 2006 directly to medical centers and certain offices by using several approaches. About $92 million of these funds was allocated to support new mental health strategic plan initiatives for fiscal year 2006.
VA also allocated about $66 million to support the recurring costs of the continuing mental health strategic plan initiatives that were funded in fiscal year 2005. The remaining approximately $42 million was not allocated. Officials at some medical centers expected to use all the allocations they received during fiscal year 2006. However, officials at other medical centers were uncertain that they would use all their allocated funds for plan initiatives during the fiscal year. VA headquarters allocated about $158 million directly to medical centers and certain offices through RFPs and other approaches targeted to specific initiatives related to the mental health strategic plan in fiscal year 2006. (See table 3.) About $92 million was for new mental health strategic plan activities, and about $66 million was to support the recurring costs of continuing mental health strategic plan initiatives that were first funded in fiscal year 2005. As in fiscal year 2005, the new resources went to support a range of mental health services in line with the priorities of VA's leadership and legislation, according to VA officials. Funding for PTSD services, services for OIF and OEF veterans, substance abuse treatment, and CBOC mental health services accounted for nearly three-fifths of the funds allocated for new initiatives. VA did not allocate resources in fiscal year 2006 for mental health strategic plan initiatives through its general resource allocation system, according to VA officials. VA headquarters officials used RFPs and other approaches targeted to specific initiatives to determine which medical centers would receive funding for new mental health strategic plan initiatives in fiscal year 2006. In November 2005, for example, VA issued an RFP that covered six mental health areas: PTSD services, including residential services; health promotion and preventive care services for veterans returning from OEF and OIF; specialized substance abuse treatment programs; new mental health residential rehabilitation and treatment programs; enhanced or new CBOC mental health services; and new telemental health programs to provide mental health services through videoconferencing. VA also used other approaches to target funds to medical centers for grant and per diem program liaisons, new or expanded mental health intensive case management teams, and expanded inpatient services at the Tennessee Valley Healthcare System medical center. Further, VA allocated funding for medical supplies, equipment, and office furniture for Gulf Coast mental health programs affected by Hurricane Katrina. As in fiscal year 2005, VA allocated funding to the Employee Education System to support educational programs. VA also allocated funding to support additional mental health initiatives such as the development of web-based support tools for veterans with mental health concerns, infrastructure improvements at residential rehabilitation treatment facilities, suicide prevention efforts, and Stand Down events to provide services such as counseling and health screenings for homeless veterans. VA did not allocate about $42 million of the $200 million planned for mental health strategic plan initiatives in fiscal year 2006 by any approach. The approximately $42 million in unallocated funds was intended for certain mental health strategic plan initiatives based on an allocation plan developed by VA.
According to VA officials, VA was unable to allocate all of the $200 million, in part, because of the delayed implementation of three new Centers of Excellence focusing on veterans' mental health issues, including PTSD, for which VA planned allocations totaling $4.5 million. VA officials also cited the unanticipated length of time required to refine the processes for implementation of initiatives related to the provision of mental health services in primary care settings. VA had solicited proposals related to primary care mental health services through a May 2006 RFP and had anticipated allocating about $11 million for such services from funds reserved for emerging needs related to the mental health strategic plan. In addition, VA officials reported that a portion of the funds was unallocated for reasons related to the timing of allocations made for plan initiatives through RFPs and other targeted approaches. Specifically, some of these allocations were made well into the fiscal year. VA allocated through these approaches only the amount of funds that would carry the projects through the end of fiscal year 2006, not the full 12-month costs; VA officials said they anticipated that the full 12-month allocation would be available for these projects in fiscal year 2007 (a simplified sketch of this proration appears below). Most of the unallocated funds had been planned for initiatives to provide services that VA identified as not directly in response to legislation that expressly required spending or authorized such services. (See table 4.) Officials at seven medical centers we interviewed in May and June of 2006 reported using funds allocated to them through RFPs and other approaches to support new fiscal year 2006 initiatives and to continue to support initiatives funded in fiscal year 2005. Officials at four of these medical centers told us that they were using these funds to support expanded mental health services. For example, officials at several medical centers, including Bay Pines, Decatur, and the Tennessee Valley Healthcare System, reported using fiscal year 2006 funding to expand mental health services in their CBOCs by adding clinical staff. As part of this expansion of services, the Tampa medical center used funding for a new mental health intensive case management program. Five medical centers had received funding for expanded mental health services but had not yet used all of the allocated funds. The Albuquerque medical center, for example, had received funding for a new substance abuse program for geriatric patients and a new case management program for veterans with PTSD. As of May 2006, both programs were still being developed, and positions had been advertised but not yet filled. Officials at two medical centers reported that they did not anticipate problems using all of the funds they had received in fiscal year 2006. However, officials at four other medical centers were less certain they would be able to use all of the funds. Officials at two of these medical centers were not sure whether they would be able to hire all of their new staff by the end of the fiscal year. In addition, officials at the Bay Pines and Phoenix medical centers noted that they had not yet learned whether proposals they submitted in response to fiscal year 2006 RFPs would be funded; as a result, officials at those medical centers were uncertain whether they would be able to use all of their fiscal year 2006 funds for plan initiatives by the end of the fiscal year.
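As a minimal sketch of the proration approach just described, the following fragment assumes a simple months-remaining formula; it is illustrative only, not VA's actual budgeting method, and it reuses the report's fiscal year 2005 example of a $4 million annual project funded midway through the year.

```python
# Minimal sketch of prorating a mid-year allocation, assuming funds cover
# only the months remaining in the fiscal year. Illustrative only; this is
# not VA's actual budgeting formula.

def prorated_allocation(annual_cost, months_remaining):
    # Fund only the fraction of the fiscal year that is left.
    return annual_cost * months_remaining / 12

# The report's example: a project with a $4 million annual cost funded
# midway through the fiscal year receives half the annual cost.
print(prorated_allocation(4_000_000, 6))   # 2000000.0
# The full 12-month cost is expected to be funded the following fiscal year.
print(prorated_allocation(4_000_000, 12))  # 4000000.0
```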
VA tracking of spending for mental health strategic plan initiatives was inadequate for fiscal years 2005 and 2006. In fiscal year 2005, VA headquarters did not track spending on mental health strategic plan initiatives. In fiscal year 2006, VA began to track some information on medical centers' mental health strategic plan initiatives but did not track the amount of allocated funds spent on them. VA headquarters officials used this newly instituted tracking system to gather implementation information reported by networks and medical centers on a quarterly basis. The tracked information was primarily related to positions to be filled, the schedule for filling them, and when they were filled. Headquarters officials said that this tracking was intended, in part, to measure medical centers' progress in implementing plan initiatives. Officials told us that they believe that tracking of hiring provides information on how funds were spent because most costs of initiatives are personnel costs. However, the data on hiring did not include information on the individual salaries of staff, associated benefits, or the portion of the fiscal year for which staff were employed, nor did they include spending on equipment, supplies, rent, or renovation of facilities. As a result, the quarterly reports do not allow VA to determine how much was spent on plan initiatives. In fiscal year 2006, VA headquarters officials compiled information on the amount of funds returned to headquarters that medical centers could not spend during the fiscal year. However, VA does not have information on whether the funds medical centers retained were spent for plan initiatives. Available information indicates that spending of allocations for mental health strategic plan initiatives was substantially less than planned in both fiscal years 2005 and 2006. In fiscal year 2005, approximately $12 million of the planned $100 million for plan initiatives was not allocated for plan initiatives and thus was not spent on them. Thirty-five million dollars was allocated through VA's general resource allocation system, but because VA headquarters did not specify that these funds were for mental health strategic plan initiatives, it is likely that portions of this money were not spent on them, and VA officials said they do not have information on these funds being spent for plan initiatives. In addition, VA officials told us that they did not have information on the extent to which the approximately $53 million in funds that were allocated directly to medical centers and certain offices was actually spent on plan initiatives. Officials at medical centers we interviewed told us that they used some of these funds on mental health activities other than the planned initiatives or carried over funds until the next fiscal year. In fiscal year 2006, available information indicates that the maximum amount of allocated funds that could have been spent for plan initiatives also fell substantially below what was planned. About $42 million of the $200 million that was planned for allocation to mental health strategic plan initiatives was never allocated for them, and thus, never spent for plan initiatives. Additionally, about $46 million of the approximately $158 million that was allocated was returned by medical centers to headquarters because it had not been spent for plan initiatives before the end of the fiscal year.
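The fiscal year 2006 figures above can be reconciled with a short sketch; it simply restates the report's arithmetic, and the "upper bound" framing reflects the report's point that VA did not track how retained funds were actually spent.

```python
# Rough reconciliation of the fiscal year 2006 figures reported above.
# This restates the report's arithmetic; because spending was not tracked,
# funds retained by medical centers are only an upper bound on what could
# have been spent on plan initiatives.

planned = 200_000_000          # planned for plan initiatives above FY 2004 levels
never_allocated = 42_000_000   # never allocated for plan initiatives
returned_unspent = 46_000_000  # allocated but returned to headquarters

allocated = planned - never_allocated    # about $158 million
retained = allocated - returned_unspent  # about $112 million

print(f"Allocated: ${allocated:,}")               # Allocated: $158,000,000
print(f"Upper bound on spending: ${retained:,}")  # Upper bound on spending: $112,000,000
```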
However, the remaining approximately $112 million of funds allocated to and retained by medical centers and offices was not necessarily spent on plan initiatives as originally planned. VA officials provided written guidance to medical centers in August 2006 instructing them to spend funds for other mental health activities if they could not spend them for the planned initiatives before the end of the fiscal year. VA officials told us that because they had instructed medical centers to spend the funds on mental health activities, they considered such spending to constitute spending on mental health strategic plan activities. VA's guidance, however, did not specify that funds be used for the plan initiatives or alternative initiatives. Moreover, VA did not track specifically how these funds were spent. As a consequence, VA cannot determine how much of the approximately $112 million that was allocated for plan initiatives and not returned to headquarters was spent on plan initiatives. VA allocated additional resources for mental health strategic plan initiatives in fiscal years 2005 and 2006 to help address identified gaps in VA's mental health services for veterans. The allocations that were made resulted in some new and expanded mental health services for plan initiatives, according to officials at selected medical centers. However, in fiscal year 2005, lack of adequate time for headquarters to allocate funds for plan initiatives to medical centers, late-in-the-year allocations that hampered medical center efforts to bring staff on board during the fiscal year, and a lack of guidance concerning allocations for plan initiatives made through VA's general resource allocation system resulted in spending on initiatives falling short of what was planned. In fiscal year 2006, VA allocated a larger amount to medical centers and other offices than in fiscal year 2005: approximately $158 million of the planned $200 million for plan initiatives. However, at the end of the fiscal year about $46 million was returned to VA headquarters that had not been spent on mental health strategic plan initiatives, and some funds that remained with medical centers and other offices may have been directed toward mental health activities other than plan initiatives. Although available information shows that a substantial portion of the resources intended for plan initiatives in fiscal years 2005 and 2006 was not spent on these initiatives, VA does not know the amount of allocated funds actually spent on them. The extent of spending is unknown because VA did not track spending of these funds. Although some tracking of mental health strategic plan initiatives was started in fiscal year 2006, data were not collected that would allow an assessment of spending. Tracking the extent to which allocations for plan initiatives are spent on these initiatives is important as VA continues to allocate resources for future plan initiatives. It would help to ensure that the money is being spent as planned and that VA is in fact addressing the gaps it has identified in mental health services for veterans. To provide information for improved management and oversight of how funds VA allocates are spent to fill identified gaps in mental health services for veterans, we recommend that the Secretary of Veterans Affairs direct the Under Secretary for Health to take the following action: Track the extent to which the resources allocated for mental health strategic plan initiatives are spent for plan initiatives.
VA did not provide agency comments on the contents of this report. We offered VA the opportunity to review and comment on the report but not to retain copies of the draft, as part of a process to help safeguard the contents from unauthorized disclosure. In a written response (reproduced in app. III), VA said that it was unable to provide comments on the draft report because VA was not provided a copy of the report for appropriate staffing, including review and analysis. VA further stated that while it respected our desire to maintain the integrity of GAO draft reports by preventing improper disclosure of draft contents, this did not outweigh the need for VA staff to have a copy of the draft report for review and analysis. We have provided similar report review opportunities to other agencies for other reports and have received agency comments in those circumstances. We met with VA officials on November 14, 2006, and provided them with an oral briefing covering the contents of the draft report. Further, a portion of the contents of this report had previously been released as a statement for the record at a hearing held by the House Veterans' Affairs Committee, Subcommittee on Health, on September 28, 2006. We discussed the information in that statement with VA officials who have responsibilities related to mental health services, budgeting, and the allocation of financial resources, and they agreed that the data in the statement were accurate. As a result, VA is aware of the report's contents. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from the date of this report. At that time, we will send copies to the Secretary of Veterans Affairs, appropriate congressional committees, and other interested parties. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7101 or ekstrandl@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV.

Medical centers and other facilities:
Decatur, Ga., medical center
Tuscaloosa, Ala., medical center
Birmingham, Ala., Vet Center
Bay Pines, Fla., medical center
Tampa, Fla., medical center
Dunedin, Fla., community-based outpatient clinic (CBOC)

Community-based outpatient clinics (CBOC): CBOCs provide medical services, which can include mental health, on an outpatient basis in a community setting. CBOCs are affiliated with a VA medical center.

Compensated Work Therapy (CWT): CWT is a therapeutic work-for-pay program that (1) uses remunerative work to maximize a veteran's level of functioning, (2) prepares veterans for successful reentry into the community, and (3) provides a structured daily activity to those veterans with severe and chronic disabling physical and/or mental conditions.

Domiciliary residential rehabilitation and treatment programs for homeless veterans provide coordinated, integrated rehabilitative and restorative clinical care in a bed-based program, with the goal of helping eligible veterans achieve and maintain the highest level of functioning and independence possible.

Grant and per diem program: VA offers grants to non-VA organizations in the community to provide supportive housing programs and supportive service centers for homeless veterans.
Once programs are established, VA provides per diem payments to help offset operational expenses of the program. Grant and per diem liaisons oversee services provided by these organizations.

Mental health intensive case management teams are designed to deliver high-quality services that (1) provide intensive, flexible community support; (2) improve health status (reduce psychiatric symptoms and substance abuse); (3) reduce psychiatric inpatient hospital use and dependency; (4) improve community adjustment, functioning, and quality of life; (5) enhance satisfaction with services; and (6) reduce treatment costs.

The Mentally Ill Chemically Addicted program, intended to assist underserved veterans with serious and persistent mental illnesses, involves recovery- and rehabilitation-oriented services in Network 17 as well as training on the recovery model and psychosocial rehabilitation concepts and skills.

Operation Iraqi Freedom (OIF) and Operation Enduring Freedom (OEF) services: Assessment, preventive, and early intervention mental health services for veterans returning from combat in Iraq, Afghanistan, and other areas. These services involve outreach and education efforts, as well as a range of psychosocial support services.

The Peer Housing Location Assistance Group pilot program is a recovery-oriented program that encourages and enables each veteran to take responsibility and initiative to choose and apply for as many housing opportunities as his or her eligibility characteristics, preferences, and motivation permit. The program aims to help participants manage the process and frustrations of finding and maintaining safe and secure housing through a combination of information, problem-solving, encouragement, professional assistance, and peer support.

A collaborative venture between the North Texas Health Care System and the Texas Correctional Office on Offenders with Medical and Mental Impairments provides active outreach and case management services to veterans with diagnosed mental illness being released from Texas prisons. The venture also involves work with the Texas diversion courts for mentally ill offenders to provide outreach and case management services for veterans convicted of minor offenses who have been diagnosed with mental illness.

Polytrauma Rehabilitation Centers provide comprehensive interdisciplinary rehabilitation and coordinated complex medical, surgical, and mental health care, as well as long-term follow-up, to veterans of OIF and OEF who have sustained severe injuries and have complex rehabilitation needs.

PTSD services: Specialized services for veterans returning from Iraq and Afghanistan, as well as veterans from past service eras, including the Vietnam War. As part of VA's overall coordination of postdeployment programs, PTSD services are focused on veterans who are survivors of traumatic events and require comprehensive treatment.

Psychosocial rehabilitation: A comprehensive approach to restoring a veteran's full potential following the onset of serious mental illness. This approach involves assisting the veteran in all aspects of normal life to attain the highest level of functioning in the community; it includes such components as patient and family education; enhancement of residential, social, and work skills; cognitive behavioral therapy; motivational interviewing; integrated dual diagnosis treatment; and provision of intensive case management when needed.
Safety, security, privacy, access, and infrastructure improvements to domiciliary and residential rehabilitation treatment programs, including repairs, renovations, furnishings, appliances, equipment, household goods, and program supplies and materials. A special emphasis for a component of these funds was improving access to these mental health residential programs for women veterans.

Stand Downs are typically 1- to 3-day events that provide services to homeless veterans such as food, clothing, health screenings, VA and Social Security benefits counseling, and referrals to a variety of other supportive services such as housing, employment, and substance abuse treatment. Stand Downs are collaborative events that are coordinated among local VA facilities, other government agencies, and community agencies that serve the homeless.

Substance abuse services: Specialized services for veterans with substance abuse disorders such as alcoholism and drug addiction. These services, for example, are provided in residential rehabilitation treatment programs.

Suicide prevention initiative: Initiative designed to obtain causes of death for veterans who have died in recent years, to identify those who have died from suicide and related causes, to identify risk factors, and to evaluate regional and local variability in rates and risk factors. The goal is to obtain information that can guide evidence-based efforts at suicide prevention, nationally and at other levels.

Special needs funding for medical supplies, equipment, office furniture, and modular buildings for Gulf Coast VA mental health programs that sustained damage due to Hurricane Katrina.

Telemental health uses electronic communications and information technology to provide and support mental health care where geographic distance separates clinicians and patients. These services are often used in rural areas where the availability of mental health providers is limited.

Web-based support tools: Initiative to develop an interactive set of web-based tools to allow veterans who have behavioral or mental health concerns to track important aspects of their self-care and professional care.

In addition to the contact named above, Debra Draper, Assistant Director; James Musselwhite, Assistant Director; Jennie Apter; Robin Burke; and Steven Gregory made key contributions to this report.

Related GAO Products
VA Health Care: Preliminary Information on Resources Allocated for Mental Health Strategic Plan Initiatives. GAO-06-1119T. Washington, D.C.: September 28, 2006.
VA Health Care: Budget Formulation and Reporting on Budget Execution Need Improvement. GAO-06-958. Washington, D.C.: September 20, 2006.
VA Long-Term Care: Data Gaps Impede Strategic Planning for and Oversight of State Veterans' Nursing Homes. GAO-06-264. Washington, D.C.: March 31, 2006.
VA Long-Term Care: Trends and Planning Challenges in Providing Nursing Home Care to Veterans. GAO-06-333T. Washington, D.C.: January 9, 2006.
VA Health Care: VA Should Expedite the Implementation of Recommendations Needed to Improve Post-Traumatic Stress Disorder Services. GAO-05-287. Washington, D.C.: February 14, 2005.
VA Long-Term Care: Oversight of Nursing Home Program Impeded by Data Gaps. GAO-05-65. Washington, D.C.: November 10, 2004.
VA and Defense Health Care: More Information Needed to Determine If VA Can Meet an Increase in Demand for Post-Traumatic Stress Disorder Services. GAO-04-1069. Washington, D.C.: September 20, 2004.
VA Health Care: Resource Allocations to Medical Centers in the Mid South Healthcare Network. GAO-04-444. Washington, D.C.: April 21, 2004.
The Department of Veterans Affairs (VA) provides mental health services to veterans with conditions such as post-traumatic stress disorder (PTSD) and substance abuse disorders. To address gaps in services needed by veterans, VA approved a mental health strategic plan in 2004. VA planned to increase its fiscal year 2005 allocations for plan initiatives by $100 million above fiscal year 2004 levels and its fiscal year 2006 allocations for plan initiatives by $200 million above fiscal year 2004 levels. GAO was asked to provide information on VA's allocation and use of funding for mental health strategic plan initiatives in fiscal years 2005 and 2006, and to examine the adequacy of how VA tracked spending and the extent of spending for plan initiatives. GAO reviewed VA reports and documents on plan initiatives and conducted interviews with VA officials at headquarters, 4 of 21 health care networks, and seven medical centers. VA networks provide oversight of medical center operations and most medical center resources.

In fiscal year 2005, VA headquarters allocated about $88 million of the $100 million above fiscal year 2004 levels that VA officials intended for mental health strategic plan initiatives. VA allocated about $53 million directly to medical centers and certain offices based on proposals submitted for funding and other approaches targeted to specific initiatives. VA solicited proposals from networks for initiatives to be carried out at medical centers through requests for proposals (RFP). In addition, VA headquarters officials said that VA allocated $35 million for plan initiatives through VA's general resource allocation system to its 21 health care networks on a retrospective basis, several months after resources had been provided to the networks through the general resource allocation system. VA did not notify network and medical center officials that these funds were to be used for plan initiatives. Network and medical center officials interviewed told GAO that they were not aware these allocations had been made. As a result, it is likely that some of these funds were not used for plan initiatives. VA did not allocate the approximately $12 million remaining of the $100 million for fiscal year 2005 because, according to VA officials, there was not enough time during the fiscal year to do so. Medical center officials said they used funds allocated for plan initiatives for new services and for enhancement of existing services. For example, two medical centers increased the number of mental health providers at community-based outpatient clinics. However, some medical center officials reported they did not use all funds allocated by the end of the fiscal year, due in part to the time it took to hire staff.

In fiscal year 2006, VA headquarters allocated about $158 million of the $200 million above fiscal year 2004 levels intended for mental health strategic plan initiatives directly to medical centers and certain offices. VA allocated about $92 million of these funds to support new initiatives, using RFPs and other targeted funding approaches. VA also allocated about $66 million to support recurring costs of continuing initiatives from the prior fiscal year. About $42 million of the $200 million for fiscal year 2006 was not allocated. Officials from seven medical centers GAO interviewed reported they had used funds for plan initiatives, such as the creation of a new case management program.
Officials at some medical centers reported they did not anticipate problems using all of the funds allocated within the fiscal year; however, officials at other medical centers were less certain they would be able to do so. VA tracking of spending for plan initiatives was inadequate. In fiscal year 2005, VA did not track such spending. In fiscal year 2006, VA tracked aspects of plan initiatives but not dollars spent. However, available information indicates that VA spending for plan initiatives was substantially less than planned. In fiscal year 2006, VA medical centers returned to headquarters about $46 million of about $158 million allocated for plan initiatives because they could not spend the funds that year. However, VA cannot determine to what extent the approximately $112 million remaining was spent on plan initiatives because it did not track specifically how these funds were spent.
Surveillance radars allow air traffic controllers to manage aircraft operating in the airspace around airports and to expedite the flow of air traffic into and out of airports by reducing the separation between aircraft. Currently, radar coverage for the Cherry Capital Airport is provided by a long-range surveillance radar in Empire, Michigan, 20 miles away from the airport. Although the radar is located near the Cherry Capital Airport, its signals are transmitted over 300 miles away to the Air Route Traffic Control Center in Minneapolis, where the controllers are responsible for using instrument flight or radar rules to control the aircraft approaching and departing the airport outside a 5-mile radius of the airport. Controllers at the Cherry Capital Airport use visual flight rules or visual procedures to manage aircraft within the 5-mile radius during the normal tower operating hours from 7 a.m. to 10 p.m. However, aircraft are allowed to take off and land at the airport when the tower is closed.

FAA conducted a study in 1994 to assess the benefits and costs of installing a surveillance radar at the airport. The results showed that the potential benefits of installing a radar exceeded the costs. Therefore, FAA concluded that the airport qualified for a radar. Because no radar was available and funds were unavailable to purchase a new radar, FAA added the airport to a waiting list of other qualifying airports. At the request of Members of Congress, FAA conducted another benefit-cost study in 1996 to determine whether the airport still qualified for a radar. The results of that study showed that the costs exceeded the benefits, thereby disqualifying the airport for a radar, and FAA removed the airport from its waiting list of qualifying airports. At our request, FAA conducted another benefit-cost study in 1997 to determine whether the airport qualified for a surveillance radar. That study's results also showed that the costs exceeded the benefits and that the airport did not qualify for a radar.

FAA uses a multifaceted process to determine which airports should get surveillance radars. (See fig. 1.) First, FAA officials at the airport identify an operational need—such as the need to reduce delays to aircraft taking off and landing and the risks of midair and terrain collisions—that they believe a surveillance radar would satisfy. They then submit a written request to the appropriate FAA regional office.

Second, FAA regional officials review the request to determine whether an operational need exists, assess the airport's need relative to those of other airports in the region, and prioritize all airports within the region that have valid radar needs. If regional officials determine that a need exists, the request is forwarded to FAA headquarters. They also include an estimate of the equipment and annual operating costs in the region's annual budget. If they determine that an operational need does not exist, the airport is no longer considered a potential candidate for a surveillance radar.

Third, FAA headquarters officials use the agency's Investment Criteria for Airport Surveillance Radar, dated May 1983, to determine whether an airport identified by the regional officials as a candidate for a radar meets FAA's cost-effectiveness criteria.
Specifically, the officials conduct a detailed study using site-specific air traffic data, along with estimated equipment and operating costs, to assess the potential benefits and costs for installing a radar at the airport. If the benefits exceed the costs, further consideration is given to the request. If the costs exceed the benefits—that is, if the benefit-cost ratio is less than 1.0—the airport is no longer considered a potential candidate for a surveillance radar. Fourth, FAA headquarters officials validate the operational needs by considering, among other things, the level of air traffic operations at the airport and the complexity of its airspace compared with those of other airports nationwide. If the officials conclude that a radar is needed, the request is approved. If FAA headquarters cannot validate the operational needs, the airport is no longer considered a potential candidate for a surveillance radar. Finally, if a radar is available from another airport where an upgraded radar has been installed, or if funds are available to purchase a new radar, the radar is acquired and installed at the airport. Otherwise, the airport is placed on a waiting list. Once radars or funds become available, however, FAA must determine whether the airports on the waiting list still meet its cost-effectiveness criteria by using the latest air traffic operations data. Airports that do not meet the criteria are no longer considered candidates for a surveillance radar. In addition to the radar requests initiated by FAA airport and regional officials, the Congress may mandate that a surveillance radar be installed at an airport. If the Congress designates funds with the mandate, the request does not have to follow FAA’s decision-making process. If the Congress does not designate funds, however, the request must follow the process, according to FAA headquarters officials. The Congress has mandated that FAA install surveillance radars at eight airports. These airports are included in appendix I. Although FAA’s decision-making process was in place in 1994, agency officials did not follow it before concluding that the Cherry Capital Airport qualified for a radar. For example, after conducting the 1994 benefit-cost study and determining that the airport met FAA’s cost-effectiveness criteria, agency officials prematurely concluded that Cherry Capital qualified for a radar. They did not assess the airport’s operational needs relative to the needs of other airports or consider the radar coverage already provided by the long-range surveillance radar nearby in Empire, Michigan. According to FAA officials, if these factors had been considered, the Cherry Capital Airport would not have qualified for a surveillance radar. The officials also told us that even if the airport had a benefit-cost ratio of 1.0 or greater, it still would not get a surveillance radar because other airports have greater operational needs and the airport already receives better radar coverage than many airports that have surveillance radars on site. They added that if a radar was installed at the airport, its signal would most likely be transmitted to another air traffic control facility where other controllers would be responsible for controlling aircraft approaching and departing the Cherry Capital Airport, an arrangement similar to the present one at the airport. 
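Viewed end to end, the decision-making process described above is a sequence of screens, any one of which can remove an airport from consideration, with a congressional mandate that carries designated funds bypassing the screens entirely. The following minimal Python sketch is illustrative only; the function and field names are our own shorthand, not FAA terminology or systems.

```python
# Illustrative sketch of FAA's radar screening sequence described above.
# All names are hypothetical shorthand for the steps in the report.

def radar_decision(airport):
    if airport.get("mandate_with_designated_funds"):
        return "install radar (bypasses the decision-making process)"
    if not airport.get("regional_need_validated"):      # steps 1-2: airport and regional review
        return "no longer considered"
    if airport.get("benefit_cost_ratio", 0.0) < 1.0:    # step 3: cost-effectiveness criteria
        return "no longer considered"
    if not airport.get("hq_need_validated"):            # step 4: headquarters validation
        return "no longer considered"
    if airport.get("radar_or_funds_available"):         # step 5: equipment or funding
        return "install radar"
    return "waiting list (criteria re-checked when radar or funds become available)"

# A candidate that passes the cost test but fails headquarters validation:
print(radar_decision({"regional_need_validated": True,
                      "benefit_cost_ratio": 1.66,
                      "hq_need_validated": False}))     # -> no longer considered
```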
In accordance with its decision-making process, FAA used its investment criteria to identify the factors to consider when conducting the 1994, 1996, and 1997 benefit-cost studies for the Cherry Capital Airport. The officials calculated benefit-cost ratios of 1.66 in 1994, 0.68 in 1996, and 0.78 in 1997, which resulted in the airport meeting FAA's cost-effectiveness criteria in 1994, but not in 1996 and 1997. We found that an overstatement of air traffic growth was the primary reason the airport met the investment criteria in 1994.

FAA officials considered the potential efficiency and safety benefits, estimated the equipment and annual operating costs, and projected air traffic operations when conducting the benefit-cost studies. To calculate the efficiency and safety benefits of installing a surveillance radar, FAA considered travelers' time saved because of the potential reductions in the delays to aircraft and the lives saved and injuries avoided because of the reductions in the risks of midair and terrain collisions. To compute the benefits represented by reduced delays to aircraft and collision risks, FAA used projections of air traffic operations at the airport, the average time required for aircraft takeoffs and landings, and the percentage of time that weather conditions at the airport would require controllers to use radar to manage the air traffic. To compute the equipment and annual operating costs, FAA estimated the costs for the acquisition and installation of the radar and the annual costs for controller and support staff salaries, training, utilities, and maintenance. The benefits and the annual operations and maintenance costs were estimated over a 15-year period and discounted to the present time using the discount rate published by the Office of Management and Budget.

FAA used both national and site-specific data to compute the benefits and costs. For example, the values for travelers' time saved, lives saved, and injuries avoided were national data published annually by the Department of Transportation. The estimated costs for acquiring the radar were FAA's purchase price for the surveillance radar plus other necessary equipment and personnel training costs. The projections of air traffic operations were specific to the Cherry Capital Airport.

Although the results of benefit-cost studies depend on several factors, FAA officials told us that the projections of air traffic operations—particularly aircraft operations controlled by instrument flight or radar rules—were the most critical factors because they affect the level of benefits that would be achieved as a result of having a surveillance radar at the airport. They commented that there was a direct correlation between the projections of air traffic operations and the benefits—as air traffic increases, so do the potential for delays to aircraft and the risks of collision, and, thus, the benefits of installing a radar at the airport also increase. In particular, we found that FAA's criteria give more weight to aircraft, such as air carriers and commuter aircraft, that carry the largest number of passengers because the higher the number of passengers, the greater the potential efficiency and safety benefits to be achieved from saving travelers' time and avoiding collisions that could cause injuries and deaths.
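The cost-effectiveness test described above reduces to a standard discounted benefit-cost calculation: benefits and recurring costs are discounted over a 15-year horizon, and the resulting ratio is compared with the 1.0 threshold. The following Python sketch shows the general shape of such a computation; the dollar amounts and the 7 percent discount rate are illustrative assumptions, not figures from FAA's studies.

```python
# Minimal sketch of a discounted benefit-cost ratio over a 15-year horizon,
# as described above. All inputs are illustrative assumptions.

def present_value(annual_amount, rate, years):
    """Discount a constant annual amount back to the present."""
    return sum(annual_amount / (1 + rate) ** t for t in range(1, years + 1))

def benefit_cost_ratio(annual_benefits, equipment_cost, annual_operating_cost,
                       rate=0.07, years=15):
    # Benefits (travelers' time saved, lives saved, injuries avoided) and
    # recurring operating costs are discounted; equipment is an up-front cost.
    pv_benefits = present_value(annual_benefits, rate, years)
    pv_costs = equipment_cost + present_value(annual_operating_cost, rate, years)
    return pv_benefits / pv_costs

# Hypothetical inputs: $1.8 million in annual benefits, $13 million in
# equipment costs, and $600,000 in annual operating costs.
ratio = benefit_cost_ratio(1_800_000, 13_000_000, 600_000)
print(f"benefit-cost ratio: {ratio:.2f}")  # below 1.0, so the airport would not qualify
```

Because the benefit side scales with projected traffic while the equipment cost does not, the ratio is highly sensitive to the traffic projections, which is consistent with FAA officials' observation that those projections were the most critical factor in the studies.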
Given this weighting, according to FAA headquarters officials, the potential efficiency and safety benefits calculated for having a surveillance radar at the Cherry Capital Airport, which is mainly a general aviation airport, would be less than those calculated for airports that service a larger number of commercial air carriers and commuter aircraft.

FAA considered the installation of the same type of surveillance radar in all three of its studies on the Cherry Capital Airport. We found, however, that the estimated equipment costs in the 1997 study were over $8 million higher than the costs included in the other studies. Specifically, the equipment costs in the 1994 and 1996 studies totaled about $12.9 million and $13.5 million, respectively, whereas the equipment costs totaled $22 million in the 1997 study. In contrast, the annual operating costs in the 1994 and 1996 studies totaled $611,000 and $677,000, respectively, compared with $167,000 in the 1997 study. FAA officials could not explain why such significant differences existed in the cost figures or provide documentation to support the costs included in the 1994 and 1996 studies. They did, however, provide support for the costs included in the 1997 study. FAA headquarters officials speculated that the costs differed because the 1994 and 1996 studies only included the costs for a surveillance radar and not the costs for the necessary auxiliary equipment.

To develop the air traffic projections in the 1996 and 1997 studies, FAA officials considered the historical air traffic growth at the Cherry Capital Airport and the mix of aircraft using the airport. As shown in table 1, they assumed that air traffic at the airport would grow, on average, about 1 percent annually. The FAA officials were uncertain about how the higher projections in the 1994 study were developed. They told us that the original projections were probably based on historical data, but were adjusted upward based on input from headquarters, regional, and district officials to reflect a 4.2-percent projected average annual growth rate, also shown in table 1. We could not determine the basis for the adjustments because FAA did not maintain supporting documentation. Nevertheless, FAA headquarters and regional officials, as well as the FAA officials and controllers at the Cherry Capital Airport, all agreed that the 1994 projections were overstated.

For the 1996 and 1997 studies, FAA based its projections on actual air traffic growth at the airport over the 10-year periods preceding the 1996 (1986 through 1995) and 1997 (1987 through 1996) studies. As shown in table 2, the actual annual growth of air traffic from fiscal year 1986 through fiscal year 1996 ranged from an increase of 22.5 percent to a decrease of about 6.5 percent. According to FAA officials, the large increase in air traffic in fiscal year 1987 was due to the introduction of new air carrier service at the airport. Because the officials did not expect such a large increase in air traffic to recur in future years, they excluded the surge in air traffic in fiscal year 1987 from the air traffic projections in the 1996 and 1997 studies. Therefore, the resulting average annual growth rate used in the 1996 and 1997 studies was about 1 percent. Also, as illustrated in tables 1 and 2, the 128,704 projected air traffic operations included in the 1996 study more closely tracked the 128,419 actual operations that occurred in 1996 than the 148,000 operations projected in the 1994 study.
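FAA's approach of averaging historical annual growth while excluding the one-time fiscal year 1987 surge can be illustrated with a short calculation. The Python sketch below is a minimal illustration; the yearly operations counts are hypothetical stand-ins, since tables 1 and 2 are not reproduced here.

```python
# Sketch of projecting air traffic from historical annual growth rates while
# excluding a one-time surge year, as FAA did in its 1996 and 1997 studies.
# The yearly counts below are hypothetical, not the report's actual data.

operations = {1986: 100_000, 1987: 122_500,   # 22.5% surge: new air carrier service
              1988: 121_000, 1989: 123_000, 1990: 124_500, 1991: 123_500,
              1992: 125_000, 1993: 126_500, 1994: 124_000, 1995: 127_000}

years = sorted(operations)
growth_rates = []
for prev, curr in zip(years, years[1:]):
    if curr == 1987:   # exclude the surge year from the average
        continue
    growth_rates.append(operations[curr] / operations[prev] - 1)

avg_growth = sum(growth_rates) / len(growth_rates)
projection = operations[years[-1]] * (1 + avg_growth)
print(f"average annual growth: {avg_growth:.1%}")
print(f"next-year projection: {projection:,.0f} operations")
```

On these hypothetical numbers, including the 1987 surge in the average would raise the projected growth rate several-fold, which illustrates how sensitive long-run projections are to the treatment of one-time events.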
Even with these lower projections, the 123,957 actual air traffic operations reported for fiscal year 1997 were considerably less than the 152,000 projected in the 1994 study, the 130,078 projected in the 1996 study, and the 130,318 projected in the 1997 study.

Since air traffic projections were the most critical factors influencing the results of the benefit-cost studies for the Cherry Capital Airport, we requested air traffic projections developed by the state of Michigan and Traverse City transportation planning officials to determine what impact their projections would have had on the results of FAA's 1997 study. We found, however, that the state and local officials relied routinely on FAA's air traffic projections and, therefore, that using their projections would not have had any impact on the 1997 study results.

We did, however, identify another set of air traffic projections developed in 1996 (based on 1994 actual air traffic data), which had been used by two consulting firms. The firms used the projections in studies conducted for the Michigan Department of Transportation and the Northwestern Regional Airport Commission to identify facility improvements needed at the Cherry Capital Airport, such as expanding the terminal building and parking areas. The projections the firms used were based on a higher annual air traffic growth rate and a higher baseline of air traffic operations than FAA's projections. Whereas FAA projected an average annual growth rate of 1 percent in its 1996 and 1997 studies, the firms projected a growth rate of about 1.5 percent. Also, FAA's actual air traffic count of 124,000 for 1994 included only aircraft operations that were managed by the Cherry Capital and the Minneapolis controllers. The firms added 18,000 operations to FAA's air traffic count by including an estimate of aircraft operations that were not managed by the controllers because they occurred at Cherry Capital when the tower was closed. While the firms' count might have been appropriate for determining facility needs, FAA's count was more appropriate for determining radar needs.

Nonetheless, we asked FAA to conduct a benefit-cost study using the firms' projections to determine the impact on the 1997 study. When the air traffic projections developed by the firms were used, they produced a benefit-cost ratio of 1.35, which exceeded the minimum threshold for meeting FAA's cost-effectiveness criteria to qualify for a surveillance radar. However, as mentioned previously, FAA officials told us that even if the airport were to achieve a benefit-cost ratio of 1.0 or greater, it still would not get a surveillance radar because other airports have greater operational needs and the airport already receives better radar coverage than many other airports that have surveillance radars.

In response to the safety concerns raised by Members of Congress and controllers at the Cherry Capital Airport, such as the greater risk of aircraft collisions that results from increased air traffic, FAA installed a Terminal Automated Radar Display and Information System (TARDIS) in 1997 to help the controllers locate and identify aircraft approaching or departing the airspace around the airport. The TARDIS is a commercial, off-the-shelf system that consists of a computer, monitor, and software costing about $23,000.
Although the system displays data, such as aircraft speed and altitude, received directly from the surveillance radar in Empire, Michigan, the Cherry Capital controllers can only use it as a visual aid and cannot use it to control or separate aircraft. According to FAA regulations, the Cherry Capital Airport controllers can only use visual procedures or visual flight rules to track aircraft. Controllers at the Cherry Capital Airport told us that the TARDIS has helped them manage air traffic better, but that they have had difficulty using it. They said that, on occasion, the information the TARDIS displays, such as aircraft identification and altitude, has overlapped and been unreadable. FAA headquarters and regional officials agreed that the data display problem exists occasionally but said that it is not unique to the TARDIS at the Cherry Capital Airport. They commented that the problem does not compromise safety at the airport because the additional equipment is only intended to be used as a visual aid and not to control air traffic. Moreover, the Minneapolis controllers use the radar in Empire to track aircraft flying under instrument flight rules until control of the aircraft is switched, via radio contact, to the Cherry Capital controllers. The switch usually occurs within a 5- or 10-mile radius of the airport. Also, FAA's regulations require that pilots contact the Cherry Capital controllers prior to entering the airport's airspace. According to the officials, the TARDIS provides two benefits to the Cherry Capital controllers—enhanced traffic monitoring capabilities and data directly from the radar in Empire. Even if the automated system at the Minneapolis facility fails, the TARDIS would still receive data from the Empire radar.

Beginning in 1999 and continuing through 2004, FAA plans to retire all of the older airport surveillance radars (ASR), specifically ASR-7 and ASR-8, which were installed in the 1960s and 1970s. These radars, currently located at 101 airports, will be replaced as part of FAA's efforts to modernize its air traffic control system with new, technologically advanced ASR-11 radars, which cost over $5 million each. During our review, we found that 75 of the 101 airports scheduled to have their radars upgraded had fewer total air traffic operations than the Cherry Capital Airport in 1996 and that FAA will spend well over $375 million to purchase replacement radars for these airports. This cost does not include the additional expenditures for auxiliary equipment and for the modifications to airport infrastructure required for the effective operation of the radars.

We noted that FAA officials routinely conduct benefit-cost studies using air traffic operations as one of the critical factors in deciding whether it would be cost-effective to install surveillance radars at airports without radars. Yet FAA officials did not conduct similar studies to determine whether it would be cost-beneficial to replace all of the existing ASR-7 and ASR-8 radars, to prioritize replacement of the radars, or to assess whether the circumstances that initially warranted installation of the radars at the airports had changed over the years. The officials agreed that the results of benefit-cost studies would be a relevant factor in deciding whether to install the replacement radars.
But they said they have no plans to conduct such studies because they believe it would be very difficult to discontinue radar operations at an airport found not to qualify; the public's perception would be that safety was being reduced, even if safety was not compromised and other circumstances warranted the discontinuance of radar operations. FAA's past practice has been that once an airport gets a radar, it qualifies for a replacement radar regardless of changes in the air traffic or the other circumstances that initially warranted the radar. Although FAA has criteria for discontinuing radar operations, the agency has never done so.

FAA officials also explained that there may be other important reasons, besides cost-effectiveness, for replacing or installing a radar at an airport. These reasons include an airport's location; the complexity of the airspace surrounding an airport; the capacity of an airport to serve multiple satellite airports; the capacity of an airport to provide relief capacity to hub or major airports on an as-needed basis; and national security.

We asked FAA for documentation of the operational needs that showed why the radars were installed initially at the 75 airports with fewer total air traffic operations than the Cherry Capital Airport that are scheduled to have their radars replaced. In response, FAA headquarters officials contacted the airports to obtain information on the rationale for installing the radars. Among the reasons FAA provided were that some of the airports provide radar services to the Air National Guard, military bases, and multiple satellite airports or serve as alternates for major airports or that the radars are the only sources for radar coverage in mountainous areas. FAA also cited congressional interest as a reason for installing surveillance radars at some airports. We were unable to verify the validity of FAA's rationales because FAA did not have records dating back to the 1960s and 1970s to document why the radars were installed. FAA's information, however, shows that at some of the airports, the circumstances that originally justified the installation of radars no longer exist. See appendix II for a list of the 75 airports and more details about FAA's justifications for the initial installation of the radars in the 1960s and 1970s.

Although installing and retaining radars at some of the airports with fewer total air traffic operations than the Cherry Capital Airport might be justified, conducting benefit-cost studies and revalidating the operational needs would ensure that (1) radars are installed or replaced first at the airports that have the greatest needs and (2) FAA is not spending millions of dollars to replace radars when continued operation of the existing radars might not be justified. Since FAA already has a process in place for conducting benefit-cost studies, we believe that the time and costs associated with conducting similar studies to determine the effectiveness of replacing existing radars would be minimal.

An overstatement of projected air traffic growth was the primary reason the Cherry Capital Airport met FAA's cost-effectiveness criteria in 1994, and agency officials prematurely concluded that the airport qualified for a surveillance radar. FAA officials expected a higher rate of growth for air traffic at the airport in future years, and as a result, the potential benefits of installing a radar were greater than the costs.
If FAA had included less optimistic air traffic projections in its 1994 study, the Cherry Capital Airport would not have met the agency’s cost-effectiveness criteria. Furthermore, if FAA had followed its decision-making process by assessing the airport’s needs relative to other airports’ needs and considered the existing radar coverage, the airport would not have been considered for a surveillance radar. Even if the benefits exceeded the costs, there was no guarantee that the airport would get a radar because of the competing needs of other airports within the region and the quality of service that the radar in Empire, Michigan, already provides to the Cherry Capital Airport. Safety and confidence in the national airspace system are very important, and several factors must be considered when making decisions regarding the installation and replacement of surveillance radars. However, FAA’s current plans to install replacement radars without conducting benefit-cost studies and revalidating operational needs may result in the agency spending millions of dollars to replace radars at airports with fewer air traffic operations than the Cherry Capital Airport, which does not meet FAA’s cost-effectiveness criteria for having a radar. FAA’s perceived difficulties in discontinuing radar operations at an airport only elevate the need for conducting benefit-cost studies and assessing the operational needs. We believe that conducting benefit-cost studies and assessing operational needs before replacing the radars would allow FAA to obtain the convincing data needed to ensure that the equipment is installed at the airports that have the greatest needs and that FAA could use the data to prioritize the installation of the radars at qualifying airports. In addition, conducting these analyses would give FAA the opportunity to reassess the benefits and costs of replacing the equipment and ensure that funds are not spent to modernize radars at airports where continued radar operations might not be justified. Because of current budget constraints and the future expenditures associated with installing radars as part of the effort to modernize the nation’s air traffic control system, we recommend that the Secretary of Transportation direct the Administrator of the Federal Aviation Administration to conduct benefit-cost studies to validate the cost-effectiveness and revalidate the need for the radars at airports scheduled to receive replacement radars and to use the results of the studies in prioritizing the replacement of the radars at qualifying airports. Furthermore, the Federal Aviation Administration should advise the Congress on the results of these studies for its consideration during deliberations on the Department of Transportation’s budget request. We provided copies of a draft of this report to the Department of Transportation and the Federal Aviation Administration for review and comment. We met with Federal Aviation Administration officials, including the Project Leader, Integrated Product Team/Terminal Surveillance Program, Communications, Navigation, Surveillance, and Infrastructure Directorate, Air Traffic Services; and Business Manager, Integrated Product Team/Terminal Surveillance Program, Office of Communication, Navigation, and Surveillance Systems, Research and Acquisitions. We also met with Department of Transportation officials from the Offices of the Assistant Secretaries for Administration and for Budget and Program Performance. 
The agencies generally agreed with the findings, conclusions, and recommendation presented, but commented that we should include information in the report on instrument flight rule operations and ASR-9 radars located at airports that had fewer total air traffic operations than the Cherry Capital Airport in 1996. Specifically, the agencies noted that instrument flight rule operations may be a better indicator of the need for a radar at airports than total air traffic operations and, thus, could have an impact on the results of benefit-cost studies. In addition, they commented that some airports that currently have ASR-9 surveillance radars, which were installed in the 1980s, also had fewer total air traffic operations than the Cherry Capital Airport did in 1996. Although the Federal Aviation Administration currently has no plans to replace these radars, the agencies noted that the equipment will need to be replaced over the next 10 years. The Federal Aviation Administration reiterated that the results of benefit-cost studies also could be used to revalidate the operational needs for the radars before they are replaced. However, the agency has no plans to conduct such studies for these airports. In response to the agencies' comments, we included more detailed information about the airports that currently have ASR-9 radars in appendix I and information about airports' instrument flight rule operations in appendix II. The agencies also suggested several changes to improve the accuracy and clarity of the report that we incorporated where appropriate.

We performed audit work at FAA's headquarters in Washington, D.C.; the Great Lakes Regional Office in Chicago; the Air Route Traffic Control Center in Minneapolis; and the Cherry Capital Airport in Traverse City, Michigan. To determine what process FAA currently has in place for determining which airports without radars may be eligible for surveillance radars, we interviewed officials at FAA's headquarters, regional, and airport offices and reviewed and analyzed pertinent FAA criteria, regulations, procedures, and other guidance documents.

To identify the factors FAA considered when conducting the 1994, 1996, and 1997 benefit-cost studies, we analyzed the studies and supporting documents, FAA's Investment Criteria for Airport Surveillance Radar, dated May 1983, and other guidance documents for conducting such studies. We interviewed FAA headquarters officials currently responsible for conducting benefit-cost studies. We also obtained information on the factors FAA considered when developing air traffic projections, analyzed the projections, and compared actual and projected air traffic operations. In addition, we interviewed representatives of local planning and public interest groups located in the Traverse City area that were familiar with the Cherry Capital Airport's air traffic operations to obtain information on past and anticipated air traffic growth, the need for a surveillance radar, and the safety concerns at the airport.

To determine the impact other air traffic projections would have had on the results of FAA's 1997 benefit-cost study, we interviewed FAA officials and controllers working at the Cherry Capital Airport, officials of the Michigan Department of Transportation and the Traverse City Planning Commission, and representatives of two aviation consulting firms. We obtained air traffic projections from the consulting firms and had FAA headquarters officials conduct sensitivity analyses using the projections.
Although we evaluated what impact the projections would have had on the results of the 1997 study, we did not evaluate the methodologies used by the consulting firms to develop their projections because this was not part of the scope of our review. To determine what actions FAA has taken to address the safety concerns raised by Members of Congress, air traffic controllers, and local citizens, we obtained information on the operational capabilities of the TARDIS and on how the equipment is intended to be used through interviews with FAA headquarters and regional officials, the Cherry Capital controllers, and airport officials. In addition, we collected data from FAA that identified the airports with fewer total air traffic operations than the Cherry Capital Airport in 1996 that are scheduled to receive replacement surveillance radars. We discussed with FAA headquarters officials the rationales for initially installing surveillance radars at the airports and when the existing radars are scheduled to be replaced. However, we did not contact representatives at the airports to verify the information provided by FAA headquarters officials. We also obtained data on airports that currently have ASR-9 radars and fewer total air traffic operations than the Cherry Capital Airport. We performed our review from October 1997 through May 1998 in accordance with generally accepted government auditing standards.

We are providing copies of this report to interested congressional committees; the Secretary of Transportation; the Administrator, FAA; and the Members of Congress representing the Traverse City area. We will also make copies available to others on request. If you or your staff have any questions or need additional information about this report, please call me at (202) 512-2834. Major contributors to this report are listed in appendix III.

The airports listed in appendix I include Nantucket Memorial (Nantucket, MA); Theodore Francis Green State (Providence, RI); Portland International (Portland, ME); Spokane International (Spokane, WA); Atlantic City International (Atlantic City, NJ); Fort Wayne International (Fort Wayne, IN); Roswell Industrial Air Center (Roswell, NM); Charlottesville-Albemarle (Charlottesville, VA); Cedar Rapids Municipal (Cedar Rapids, IA); Harrisburg International (Harrisburg, PA), which provides coverage and services in a challenging terrain environment; Walker Field (Grand Junction, CO); Huntsville International-Carl T. Jones Field (Huntsville, AL); Rogue Valley International (Medford, OR); Rio Grande Valley International (Brownsville, TX); Lynchburg Regional-Preston Glenn Field (Lynchburg, VA); Fayetteville Regional/Grannis (Fayetteville, NC); Missoula International (Missoula, MT); and Eastern West Virginia Regional-Shepard Field (Martinsburg, WV), where the radar was mandated by the Congress in 1991 and the radar signal will be remoted to the terminal radar approach control facility at Dulles International Airport.
[Appendix II table: for each of the 75 airports, the scheduled date for installing the replacement radar, total air traffic operations, and FAA's rationale for the original radar installation, such as serving the Air National Guard, military bases, and satellite airports; providing coverage in mountainous terrain or where an air route traffic control center lacks adequate coverage; serving as an alternate or reliever airport for major airports, including Detroit Metropolitan, Chicago O'Hare, Minneapolis, Boston Logan, and Honolulu; supporting corporate, cargo, and flight training operations; and congressional interest.]
Major contributors to this report: Janet Barbee, Sharon Dyer, Wanda Hawkins, Mehrzad Nadji, and John Thomson.
Pursuant to a legislative requirement, GAO reviewed the benefit-cost studies that the Federal Aviation Administration (FAA) conducted for the Cherry Capital Airport in 1994, 1996, and 1997, focusing on: (1) FAA's decisionmaking process for installing surveillance radars at airports; (2) the factors, including costs, benefits, and air traffic projections, that FAA considered when conducting the 1994, 1996, and 1997 studies; (3) the impact, if any, that air traffic projections developed by other sources would have had on the results of the 1997 study; (4) actions FAA has taken to address safety concerns at the airport; and (5) FAA's plans to replace surveillance radars at airports with fewer total air traffic operations than the Cherry Capital Airport. GAO noted that: (1) FAA uses a multifaceted process to determine which airports should get surveillance radars; (2) this process includes completing a benefit-cost study, assessing an airport's need for a surveillance radar compared with the needs of other airports, and determining the availability of radar equipment or funds to purchase needed radar equipment; (3) in its 1994 benefit-cost study for the Cherry Capital Airport, FAA officials overstated the projected air traffic growth; (4) this overstated growth was the primary reason FAA concluded that the airport met its cost-effectiveness criteria; (5) moreover, in 1994, FAA officials did not follow the agency's decisionmaking process and prematurely concluded that the Cherry Capital Airport qualified for a surveillance radar; (6) when conducting the 1994, 1996, and 1997 benefit-cost studies, FAA considered the potential efficiency and safety benefits; (7) with the higher growth rate used in the 1994 study, the benefits exceeded the costs of installing a surveillance radar, so the Cherry Capital Airport met FAA's cost-effectiveness criteria; but with the lower growth rate used in the 1996 and 1997 studies, it did not qualify; (8) the air traffic projections were the most critical factors influencing the results of FAA's benefit-cost studies; (9) to address the safety concerns, FAA installed an automated display and information system at the Cherry Capital Airport in 1997; (10) while the controllers told GAO that the equipment can help them better manage air traffic and improve safety, they have difficulty using it because information on aircraft identification and altitude is sometimes unreadable on the display monitor; (11) beginning in 1999, FAA plans to replace the existing surveillance radars installed in the 1960s and 1970s at 101 airports as part of its efforts to modernize its air traffic control system; (12) seventy-five of the 101 airports had fewer total air traffic operations in 1996 than the Cherry Capital Airport did; (13) although FAA conducts benefit-cost studies and uses air traffic operations as a basis for determining the cost-effectiveness of installing surveillance radars at airports, agency officials did not conduct similar studies to determine whether it would be cost-effective to replace existing radars at the 101 airports or to prioritize the replacement of the radars; and (14) FAA has no plans to undertake such efforts because agency officials believe that it would be very difficult to discontinue radar operations at an airport because of the public's perception that safety would be reduced.
Individuals often first learn about vehicle donation programs through advertisements. Vehicle donation advertisements can be found on billboards, truck banners, and television, as well as in newsletters and even on small paper bags. Some of the most common media for vehicle donation advertisements are the radio, newspapers, and the Internet. Based on a sample of advertisements we reviewed, we found that advertisements for vehicle donations often identified that individuals could claim tax deductions for the donations, the donations served charitable purposes, and the donors' vehicles would be towed free of charge. Figure 1 identifies the most common claims made in the newspaper, radio, and Internet advertisements we reviewed.

IRS has expressed concern about some vehicle donation advertisements. According to an official from IRS's Tax Exempt Division, tax deduction claims are potentially deceptive when they do not specify that taxpayers must itemize their deductions to claim a vehicle donation, since many taxpayers do not itemize. Of the 147 advertisements we reviewed, 117 identified that taxpayers could claim a tax deduction, but only 7 advertisements specified that donors must itemize in order to claim a deduction. IRS also expressed concern about advertisements claiming that donors can value their vehicles at full, or maximum, market value when claiming a tax deduction. IRS does not define full or maximum value, but believes these claims may be misleading since vehicles are required to be valued at fair market value. IRS stated that these advertisements may be particularly misleading when they also claim that vehicles will be accepted whether they are running or not. Fair market value equals what a vehicle would sell for on the market, and takes into account a vehicle's condition and mileage, among other factors. Of the 117 advertisements we reviewed that mention tax deductions, 38 specified that donors could claim fair market value on their tax returns when donating their vehicles, while 8 identified that a donor could claim full or maximum market value. Other advertisements referred potential donors to the IRS Web site, an accountant, used car guides such as the Kelley Blue Book, or other sources for guidance on claiming a tax deduction.

After deciding to donate a vehicle to charity, a donor will generally encounter one of two types of vehicle donation programs: those operated by charities (in-house) and those operated by a for-profit or not-for-profit fund-raiser (fund-raiser). Donors may not know whether they are donating vehicles directly to charities or through fund-raisers. Figure 2 identifies the vehicle donation process for both in-house and fund-raiser vehicle donation programs.

For in-house programs, charities, typically larger ones, advertise for vehicle donations and respond to a donor's initial call inquiring about a donation. After the charity determines that it will accept the vehicle, it arranges to have the vehicle picked up, often towed, and delivered to wherever it will be stored until it is liquidated. The charity provides the donor with a receipt when the vehicle is picked up, or at a later time, to document the donation for tax purposes. At the time the vehicle is picked up, the charity obtains the title of the vehicle from the donor, and some charities may provide donors with state-required forms (e.g., release of liability) or references for establishing the tax deductible value of their donated vehicles (e.g., Kelley Blue Book or IRS guidance).
Charities we spoke with stated that it is up to the donor to establish the vehicle's value. Once the donated vehicles are collected, they are generally sold at auto auctions or salvaged for parts, but may also be sold to auto dealers or to the general public. Charities with in-house programs keep 100 percent of the net proceeds after deducting costs associated with processing the vehicles.

For fund-raiser programs, fund-raisers generally perform some or all of the tasks associated with advertising, vehicle pick up, and vehicle disposal. After deducting expenses, fund-raisers keep a portion of the net proceeds from the vehicle sale or salvage, providing the remainder of the proceeds to the specified charity. A charity working with a fund-raiser may have no oversight of the process, leaving the operation of the program, and distribution of proceeds, up to the fund-raiser. The relationship between charities and fund-raisers varies, depending on the agreements they have established. Some commercial fund-raisers may handle vehicle donation programs for many charities. For example, one national fund-raiser has contracts with about 140 charities, and another works with about 200 charities. Charities may also contract with multiple fund-raisers. Fund-raisers often support smaller charities that would not otherwise be able to participate in vehicle donation programs. For example, at one California charity, a staff person spent half her time working with two vehicle donation fund-raisers, which together generated about $110,000 for the first six months of the current year (approximately 8 to 10 percent of its annual budget).

In addition to the in-house and fund-raiser programs described above, we identified some variations in how vehicle donation programs operate. For example, see the following:

Some charities refurbish donated vehicles for their own program services or clients, rather than for sale or salvage.

One state consortium of 14 charities jointly runs a vehicle donation program in conjunction with a wrecking yard. The charities share in oversight of the operations, such as inspecting donated vehicles and monitoring vehicle donation reports. Donors can select one charity to receive the proceeds, or proceeds are split among members of the consortium equally if no charity is designated.

One large charity runs a national vehicle donation program and serves regional offices as a fund-raiser would, charging its regions vehicle processing costs. Some of the charity's affiliates choose other fund-raisers and do not participate in the national program.

Another large charity runs a national program and serves charity affiliates but also has a nonprofit vehicle donation program for other smaller charities.

The total proceeds a charity receives from a vehicle donation may be less than what a donor expects. We identified two factors that contribute to this difference. First, charities and fund-raisers often sell vehicles at auto auctions for wholesale or liquidation prices or to salvage yards for parts, rather than obtaining the amount they would receive if vehicles were sold to private parties. Second, vehicle processing and fund-raising costs are subtracted from vehicle revenue, further lowering proceeds. According to a 2001 survey of charitable donors commissioned by the Wise Giving Alliance, donors expect at least 70 to 80 percent of a charity's funds to be used for charitable purposes rather than fund-raising or administrative costs.
Actual charity receipts reported to state officials for charity fund-raising are less. For example, in New York, telemarketing fund-raisers (not specifically for vehicle donations) returned 32 percent of funds raised for charities in 2000. Although donors are often motivated by serving a charitable cause when donating their vehicles, the results of donor surveys identified that individuals are also motivated by the ability to claim a tax deduction and to dispose of an unwanted vehicle.

Figure 3 provides an example of the amount a charity received from an actual vehicle donation. In this case, a 1983 truck was donated in 2001 to a charity whose vehicle donation program is operated through a fund-raiser. The gross sale price for the truck (sold at an auction) was $375. After deducting fund-raiser and advertising expenses, net proceeds totaled $63.00. This amount was divided evenly between the fund-raiser and the charity, leaving the charity with $31.50 from the vehicle donation. The donor claimed a deduction of $2,400 on his or her tax return, based on the fair market value of the vehicle as identified in a used car guidebook.

Charities operating in-house vehicle donation programs incur costs associated with processing vehicles for sale or salvage, but do not incur additional fees generally associated with fund-raiser programs. Processing costs cannot be compared among in-house programs because charities may record their costs differently. One of the few in-house charities we spoke with reported that it earned a net average of 42 to 44 percent of the sales price of donated vehicles. Another charity operating a national program for local affiliates reported a range of 13 to 32 percent net proceeds for programs operating for over 2 years, and a deficit to slightly in excess of breakeven for newer programs.

Proceeds received by charities participating in vehicle donation programs run by fund-raisers also varied, in part due to the different processing costs deducted by fund-raisers, as well as different agreements between charities and fund-raisers for splitting net proceeds. Some charities receive a percentage of the net proceeds after the fund-raiser's costs are deducted. Other charities receive the net proceeds remaining after the fund-raiser deducts a flat fee for expenses.

California is the only state that systematically captures information on the percentage of proceeds received by charities through vehicle donation programs. However, California only captures information related to programs run by fund-raisers and cannot separately identify the number of charities that operate in-house programs. According to a report from the California State Attorney General's Office, less than 1 percent of registered charities in California have vehicle donation programs that are managed by commercial fund-raisers. In 2000, these fund-raisers generated approximately $36.8 million in sales revenue, with about $11.3 million (31 percent on average) being returned to the charities. As shown in figure 4, California charities received proceeds from fund-raiser programs ranging from less than 20 percent to over 80 percent of the net proceeds from vehicles, but most were in the 40 to 59 percent range.

Issues relating to charity proceeds from fund-raising reached the Supreme Court on March 3, 2003, in arguments related to Ryan v. Telemarketing Associates. The Attorney General of Illinois is appealing a decision of the Illinois Supreme Court to dismiss fraud charges against Telemarketing Associates.
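The arithmetic behind the figure 3 example, and the two types of fee agreements just described, can be made concrete. The following Python sketch is illustrative only: the expense figure is backed out of the report's example, and the flat-fee amount is an assumption.

```python
# Sketch of how vehicle sale proceeds reach a charity under the two
# fund-raiser agreements described above. Based on the figure 3 example:
# a $375 gross sale, expenses deducted to leave $63 net, split 50/50.

def charity_share_percentage(gross_sale, expenses, charity_pct):
    """Charity receives a percentage of net proceeds after expenses."""
    net = max(gross_sale - expenses, 0)
    return net * charity_pct

def charity_share_flat_fee(gross_sale, flat_fee):
    """Charity receives whatever remains after a flat fund-raiser fee."""
    return max(gross_sale - flat_fee, 0)

# Figure 3 example: expenses of $312 are implied (375 - 312 = 63 net),
# and an even split leaves the charity $31.50.
print(charity_share_percentage(375.00, 312.00, 0.50))  # -> 31.5

# Hypothetical flat-fee agreement of $150 on the same sale.
print(charity_share_flat_fee(375.00, 150.00))          # -> 225.0
```

On the figure 3 numbers, the charity's $31.50 is about 8 percent of the gross sale price and a small fraction of the $2,400 deduction the donor claimed, which illustrates why donor expectations and charity receipts can diverge so sharply.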
At issue were solicitations implying that cash donations would go to a charity to buy food baskets and blankets for needy veterans, while only 15 percent of the funds raised actually went to the charity. As part of the case, donor affidavits were reviewed in which some individuals stated that they would not have donated if they had known the percentage of proceeds the charity would actually receive. The Supreme Court has ruled in three previous cases that percentage-based limitations on charitable solicitations were unconstitutional. The Supreme Court decision in this case is not expected until July 2003. We plan to conduct a national survey of charities to further review vehicle donation proceeds received by charities and fund-raisers. We will identify any concerns regarding the amount of net proceeds fund-raisers keep from vehicle donations and the significance of vehicle donation programs to charity operations. Charities may consider proceeds from vehicle donations to be a welcomed, if not crucial, source of revenue to support their operations. For example, one charity stated that vehicle donations are “just keeping their heads above water.” The results of donor surveys we reviewed indicated that the ability to claim a tax deduction is one of the most important reasons individuals donate vehicles to charity. However, we found that a small percentage of Americans claim tax deductions for vehicle donations. Specifically, we reviewed a representative sample of taxpayer returns that claimed noncash contributions for the tax year 2000. Of the 129 million returns filed that year, a projected 0.6 percent, or an estimated 733,000 returns, had tax deductions for vehicle donations. We also found that deductions for vehicle donations accounted for a small fraction of forgone tax revenue. Based on the sample we reviewed, vehicle donation deductions totaled an estimated $2.5 billion of the $47 billion in noncash contributions claimed. Stocks and thrift store donations accounted for most of the tax dollars deducted for noncash charitable contributions. We estimate that in 2000, vehicle donation deductions lowered taxpayers’ income tax liability by an estimated $654 million of the $1 trillion in tax liability reported on returns. IRS guidance limits the amount of an allowable deduction to the vehicle’s fair market value, or the amount a willing, knowledgeable buyer would pay for the vehicle. We reviewed each deduction for vehicle donations in our sample to determine the average value claimed for donated vehicles in 2000 and whether these values fell within the ranges identified in a nationally recognized blue book. We estimated that the average value claimed for donated vehicles in 2000 was $3,370 and that the amounts claimed for almost all of these vehicles fell within the blue book ranges. However, since we did not have additional information regarding the vehicles’ condition and mileage, we could not determine whether reported values accurately reflected fair market value. For a donor to claim a vehicle tax deduction, the contribution must be made to a qualified organization. Churches and most nonprofit charitable, educational, and medical organizations are qualified. We submitted to IRS the names of the charities that taxpayers in our sample reported on their returns so that IRS could verify whether the recipient organizations were qualified to receive tax-deductible donations. Of the 22 charities IRS reviewed, it was able to verify that 10 were qualified to receive tax-deductible donations.
IRS could not determine whether the remaining 12 charities were qualified organizations because it needed more information than taxpayers reported on their tax returns, such as the organizations’ full names, addresses, and employer identification numbers. IRS has a compliance program to review noncash donations, including vehicle donations generating revenue over $5,000, which compares the amount a charity received upon the sale of a donated item with the amount claimed by the taxpayer as the fair market value of the item. Although differences exist between fair market values and the proceeds from items sold at wholesale prices, this program gives IRS an indication of whether a particular donation should be further scrutinized. However, IRS has no data identifying whether cases referred for further review by this program are ever pursued. IRS is also in the process of implementing a National Research Program, which may provide data on compliance issues dealing with vehicle donations and other noncash contributions. Under the program, officials will randomly select about 49,000 tax year 2001 returns to determine whether taxpayers complied with statutory income, expense, and tax reporting requirements. Returns with noncash contributions, including donated vehicles, could be subject to audit to verify donation claims. Once this project is completed, IRS plans to assess individuals’ compliance related to deductions for noncash contributions and determine whether more enforcement is needed to help ensure proper reporting in this area. IRS and other organizations, including the National Association of State Charity Officials and the Better Business Bureau, have issued guidance on steps potential donors should take before donating their vehicles to charity and claiming associated tax deductions. These steps include the following:

- Verify that the recipient organization is a tax-exempt charity. Potential donors can search IRS’s Publication 78, which is an annual cumulative list of most organizations that are qualified to receive deductible contributions.

- Determine whether the charity is properly registered with the state government agency that regulates charities. The state regulatory agency is generally the state attorney general’s office or the secretary of state.

- Ask questions about how the donated vehicle will be used to determine whether it will be used as intended. Such questions include the following: Will the vehicle be fixed up and given to the needy? Will it be resold, and if so, what share of the proceeds will the charity receive?

- Itemize deductions in order to receive a tax benefit from the donation. The decision to itemize is determined by whether total itemized deductions are greater than the standard deduction.

- Deduct only the fair market value of the vehicle. The fair market value takes into account many factors, including the vehicle’s condition, and can be substantially different from the blue book value. IRS Publication 526, “Charitable Contributions,” and IRS Publication 561, “Determining the Value of Donated Property,” provide instructions on how to calculate the fair market value of donated property.

- Document the charitable contribution deduction. IRS Publication 526 identifies requirements for the types of receipts taxpayers must obtain and the forms they must file.

- Follow state law regarding the car title and license plates.
Generally, the donor should ensure that the title of the vehicle is transferred to the charity’s name by contacting the state department of motor vehicles, and should keep a copy of the title transfer. Donors are also advised to remove the license plates, if allowed by the state. IRS and the states have identified few significant occurrences of abuse by charities and fund-raisers operating vehicle donation programs. However, the guidance above may help potential donors avoid donating vehicles to organizations that have not complied with laws or regulations related to vehicle donation activities, and may prevent problems sometimes encountered with vehicle title transfers. For example, IRS revoked the charity status of one Florida organization that solicited boat donations after finding that its charitable activities were insubstantial and that proceeds were kept for personal gain.
According to the Internal Revenue Service (IRS), charities are increasingly turning to vehicle donation programs as a fund-raising activity, resulting in increased solicitations for donated vehicles. Therefore, to make informed decisions about donating their vehicles, taxpayers should be aware of how vehicle donation programs operate, the role of fund-raisers and charities in the vehicle donation process, and IRS rules and regulations regarding allowable tax deductions. Due to the increased use of vehicle donation programs, GAO was asked to describe (1) the vehicle donation process, (2) the amount of proceeds received by charities and fund-raisers, (3) donor tax deductions, and (4) taxpayer cautions and guidance. Revenue from donated vehicles is a welcomed, and sometimes crucial, source of income for a number of charities. Donors, by following available guidance and carefully selecting the charities for their donations, can provide charity support while benefiting themselves through tax deductions or the disposal of unwanted vehicles. Taxpayers generally first learn about vehicle donation programs through advertisements. Interested donors call the advertised number and either reach a charity that operates its program in-house or a third-party fund-raiser acting on the charity’s behalf. The charity or fund-raiser asks the potential donor questions about the vehicle, and then collects and sells the vehicle for proceeds. The proceeds a charity receives from a vehicle donation may be less than what a donor expects. Two factors contribute to this difference. First, charities often sell vehicles at auto auctions for wholesale prices rather than the prices donors might receive if they sold their vehicles themselves. Second, vehicle processing costs, whether the charity’s or the fund-raiser’s, as well as the fund-raiser’s portion of net proceeds, further reduce the amount of proceeds a charity receives. Of the 129 million individual returns filed for tax year 2000, an estimated 733,000 returns had tax deductions for vehicle donations that lowered taxpayers’ tax liability by an estimated $654 million. No data exist on whether these deductions were appropriately claimed. To assist donors in making decisions regarding vehicle donations, IRS and other organizations have issued guidance on steps potential donors should take before making vehicle donations. These steps include verifying that the recipient organization is tax-exempt, asking questions about vehicle donation proceeds, and deducting only the fair market value of the vehicle on tax returns.
FEMA assists with providing a large range of services for disaster victims, including mass care (such as food and emergency medical care) in the immediate aftermath of disasters. FEMA also pays for temporary housing and crisis counseling for eligible victims. Authorized by section 408 of the Stafford Act, as amended, FEMA’s temporary housing grants cover the costs of renting alternate housing when victims’ primary predisaster residence is rendered uninhabitable or inaccessible, and/or quickly repairing damages to make the residence habitable. Until they receive such assistance, disaster victims may be forced to stay with friends or relatives or in temporary mass care shelters. The intent of the assistance is to get victims out of mass care shelters or other temporary dwellings—not to restore their residence to its predisaster condition. (Federal assistance for permanent restoration generally comes in the form of a Small Business Administration disaster loan.) A FEMA inspector typically visits each applicant’s residence, confirms whether or not it is uninhabitable or inaccessible, and obtains insurance information and documentation verifying that the dwelling is the applicant’s primary residence. Applicants whose residence is in need of repairs costing less than $100 are not eligible; the maximum grant amount is $10,000. The Fast Track process differed from the regular temporary housing assistance process in that for applications from certain designated ZIP code areas, the physical inspection of the applicant’s residence and the determination of eligibility were made after FEMA issued a check to the applicant. For the Northridge earthquake, FEMA utilized earthquake shaking intensities as criteria for designating certain geographic areas as eligible for Fast Track housing assistance. FEMA used the “Modified Mercalli Intensity” (MMI) scale, which measures the intensity of earthquake shaking on a scale of 1 to 12—the more severe the shaking, the higher the number. The most severe shaking in Northridge was at level 10; FEMA decided to use the Fast Track process for applicants residing in each ZIP code area with an MMI level of 8 or above. The degree of damage associated with this level includes the partial collapse of ordinary-quality masonry; the fall of chimneys, factory stacks, monuments, towers, and elevated tanks; and the movement of frame houses on their foundation if not bolted down. (A description of MMI intensity levels is in app. II.) FEMA officials selected a total of 68 ZIP codes to designate as eligible geographic areas with MMI readings of level 8 or higher. This designation covered an approximately 40-by-40 mile area from Santa Monica and Burbank westward into Simi Valley. FEMA initiated the Fast Track process for Northridge victims in the 68 designated ZIP code areas on January 23, 1994; limited it to applicants in only three ZIP code areas on February 3, on the basis of an analysis of the degree of damage reported by field inspectors and the temporary housing applications received; and discontinued it altogether on April 7. About 47,000 housing assistance applicants—out of about 409,000—received a check under the Fast Track process. Prior to the Northridge earthquake, FEMA used the Fast Track process for only one disaster—Hurricane Andrew in 1992. As with Northridge, FEMA used the Fast Track process for applicants in ZIP code areas believed to have sustained the greatest damage. 
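In essence, the designation step reduces to a threshold test over per-ZIP-code shaking intensities. The following is a minimal sketch of that logic; the ZIP codes and MMI readings shown are illustrative assumptions, not actual Northridge data or FEMA’s actual system.

```python
# Illustrative sketch of Fast Track area designation: a ZIP code area
# qualifies if its MMI shaking intensity is 8 or above. The ZIP codes
# and readings below are hypothetical.

FAST_TRACK_MMI_THRESHOLD = 8

mmi_by_zip = {
    "90001": 9,   # hypothetical reading
    "90002": 7,
    "90003": 10,
    "90004": 8,
}

fast_track_zips = {zip_code for zip_code, mmi in mmi_by_zip.items()
                   if mmi >= FAST_TRACK_MMI_THRESHOLD}

def process_under_fast_track(applicant_zip: str) -> bool:
    """True if a check is issued before inspection and eligibility review."""
    return applicant_zip in fast_track_zips

print(sorted(fast_track_zips))            # ['90001', '90003', '90004']
print(process_under_fast_track("90002"))  # False
```

As discussed below, applying even this simple test consistently proved difficult in practice: some designated ZIP codes fell below the threshold, and some qualifying ZIP codes were omitted.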
As authorized by section 416 of the Stafford Act, FEMA provides funding for professional counseling services for disaster workers and victims. Individuals are eligible for crisis-counseling services if they were residents of the designated disaster area or were located in the area at the time of the disaster and are experiencing mental health problems caused or aggravated by the disaster. States must apply for crisis-counseling funds. The magnitude of need is based primarily on a formula that takes into account such factors as the numbers of fatalities, injuries, homes destroyed or damaged, and unemployment resulting from the disaster. FEMA makes the funds available to the Center for Mental Health Services (CMHS), which awards grants to applicant states (typically to the state’s department of mental health). The state, in turn, disburses funds to local governments, which fund the activities of the private organizations actually providing the counseling services. In the case of the Northridge earthquake, California’s Department of Mental Health was the grantee, while Los Angeles and Ventura counties contracted with 51 service providers and oversaw their day-to-day activity. Crisis-counseling grants totaled $36 million, of which $32 million was actually expended. The Stafford Act also authorizes the FEMA Director to “take such other action, consistent with authority delegated to him by the President, and consistent with the provision of this Act, as he may deem necessary to assist local citizens and public officials in promptly obtaining assistance . . . .” The statute describes those persons eligible and the circumstances under which they are eligible to receive temporary housing aid. Under section 408(a), FEMA may help those “persons who, as a result of a major disaster, require temporary housing.” (42 U.S.C. § 5174(a)(1)(A)). Assistance can be provided for up to 18 months from the time of the disaster declaration unless an extension is granted because of extraordinary circumstances. (42 U.S.C. § 5174(a)(3)). No statutory provision, however, requires that FEMA verify that applicants have met all relevant conditions of eligibility prior to providing temporary housing assistance. FEMA has the discretion under the Stafford Act to set the methods it will use to verify eligibility. Thus, we agree with FEMA that it has the authority under the Stafford Act to implement the Fast Track process. FEMA also noted that the purpose of the temporary housing regulations is to assist “the greatest number of people in the shortest possible time.” (44 C.F.R. § 206.101(b)). In the case of Northridge, FEMA concluded that the Fast Track process was essential to meet the needs of disaster victims expeditiously. The enormous number of disaster victims and their psychological and physical need for immediate assistance provided the rationale for implementing the Fast Track process. After the earthquake, FEMA’s on-site disaster application centers and teleregistration center were overwhelmed by the unprecedented number of applicants. Because the application centers received more applicants than could be accommodated, FEMA gave applicants appointments to come back at a later date. Even so, by the end of the first month after the disaster, nearly 360,000 applications had been filed, and the backlog of housing inspections had grown to about 189,000 residences. Within the first week of the January 17, 1994, disaster, over 27,000 disaster victims were living in or outside of shelters, and appointments to submit applications for assistance were not available until mid-March.
Police intervention was required at application centers to help contain unruly crowds. On January 21, 4 days after the disaster, the President visited the disaster scene and, noting the long lines of applicants, decided that the situation was unacceptable. As a result, FEMA instituted the Fast Track process to provide residents with checks quickly so they could find better accommodations. In implementing the Fast Track process following the Northridge earthquake, FEMA experienced operational difficulties, including the inconsistent application of criteria when designating areas with the greatest estimated damage and constraints in its application processing software. These difficulties, combined with the logistical challenge of processing an enormous volume of applications for assistance, as well as FEMA’s decisions on the eligibility of housing assistance under both the regular and Fast Track processes, may have contributed to FEMA’s providing housing assistance in excess of actual needs. The decision to use the Fast Track process is ultimately a subjective judgment—specifically, that the benefit of rushing aid to certain disaster victims outweighs the risk of disbursing funds to ineligible recipients or in excess of recipients’ needs. Hence, a large-scale future disaster could lead FEMA to use a Fast Track approach again. FEMA has not developed written guidance for implementing the Fast Track process, even though FEMA’s Inspector General recommended establishing formal procedures after its first use in 1992. Furthermore, FEMA officials acknowledge that the guidance for the temporary housing assistance program needs revision. Well-planned and well-documented guidance could help FEMA avoid operational difficulties in implementing a future Fast Track process and help avoid ineligible payments. One of the first implementation tasks facing FEMA was designating the areas whose inhabitants would be “Fast Tracked.” There were no preexisting criteria for FEMA to draw on. FEMA worked with the state of California and California Institute of Technology seismologists to develop MMI maps of the Northridge area. According to a FEMA official involved in identifying ZIP code areas, the process was undertaken on a “crash” basis, possibly resulting in some errors in the selection of ZIP codes. Our analysis shows that some ZIP code areas that met FEMA’s criteria were omitted (i.e., applications from residents of those areas were not processed under Fast Track) and vice versa. Because of these errors, not all Northridge victims in similar circumstances were treated the same. According to a FEMA official who was involved in the process, FEMA ultimately designated 68 ZIP code areas whose inhabitants’ applications for temporary housing assistance would be processed under Fast Track. We traced the 68 ZIP codes—which designated eligible geographic areas with MMI readings of 8 or higher—to an MMI map identical to the one used by FEMA officials. We found that 56 of the 68 ZIP codes met FEMA’s announced criteria—they were located in areas that had experienced earthquake shaking intensities of 8, 9, or 10 on the MMI scale. As shown on the map in figure 1, we also found the following:

- Nine of the 68 ZIP codes did not meet the criteria because they were located in areas that had experienced earthquake shaking intensities of less than 8 on the MMI scale. (These nine ZIP codes account for about 4 percent of the payments that FEMA designated for recovery.)

- Three of the designated ZIP codes did not appear on the map.
- Twelve ZIP codes that met FEMA’s criteria were not designated for the Fast Track process.

Because ZIP codes that did not meet the criteria were included, residents within those areas inappropriately received Fast Track funding; conversely, residents of qualifying areas that were omitted likely did not receive the expedited processing they should have. Because we were unable to locate three of the designated ZIP codes on the map, we do not know whether they met the selection criteria. However, according to data compiled by FEMA’s OIG, no temporary housing assistance payments were made under the Fast Track process to applicants from these three ZIP code areas. According to a FEMA official who participated in the process, his notes suggest that the officials debated which MMI shaking intensities should be included—specifically, whether to include areas with an MMI level of 7. The official noted that some of the designation errors might have occurred because the final list of ZIP codes that was distributed to the federal certifying officers handling applications from Northridge victims was handwritten and therefore difficult to read. Data developed by FEMA’s Inspector General indicated that Fast Track payments were made to 110 ZIP code areas, as opposed to the 68 that FEMA designated. FEMA officials reviewed their records for a few of the ZIP codes and found that some of the discrepancy may be due to errors made in entering addresses into the database or to some recipients’ post-disaster mailing addresses differing from the addresses of the damaged residences. We analyzed the payments that FEMA made to ineligible disaster victims to determine the extent to which they might be attributable to the inclusion of ZIP codes that did not meet FEMA’s Fast Track criteria. We found that payments made to those ZIP code areas had a negligible effect on the ineligible payments, accounting for about 4 percent of the total amount. (App. IV provides a more detailed explanation of our analysis.) In accordance with the sequence of events under the regular temporary housing assistance program, FEMA’s automated system for processing applications required FEMA to enter the date of the inspection of an applicant’s residence. The date was required before the system would process the application further for the issuance of a check. Because FEMA intended that applicants under the Fast Track process receive a check before an inspection occurred, FEMA officials had to develop a way of overriding the automated system. While FEMA was able to accomplish this, the resulting records are not entirely reliable because of inconsistent data. Fictitious inspection dates were initially entered to circumvent the control, but the computer program was subsequently modified. Also, according to FEMA officials, personnel handled data entries in different ways; some made adjusting entries while others eliminated the initial entry. FEMA was able to overcome this operational problem in order to distribute checks to recipients. However, through improved planning in designing a system to accommodate the Fast Track process, such operational difficulties might be avoided without creating unreliable records. In commenting on a draft of this report, FEMA stated that the agency has been developing a new automated processing system that would include the ability to handle a Fast Track procedure. One FEMA document from the Northridge response noted that “largely because of the fast track system, multiple housing checks have been provided to individual household members. . . . In dealing with this situation, the following policies should apply: 1.
The initial increment of assistance should be provided to all applicants without regard to their membership in a household. This decision is based on the need to treat all applicants in like situations similarly.” According to some FEMA officials involved in processing Northridge applications, the large volume of applications, combined with limitations in the capability of FEMA’s application-processing computer system, made it difficult, if not impossible, to search the applicant database for potential duplicate names and/or addresses. This may have contributed to situations in which more than one applicant per household received a check. While physical inspections may have identified—and thus prevented—duplication for applicants under the regular program, the inspections would not have done so for Fast Track applicants because inspections of their residences were performed after they received checks. A memorandum laying out general procedures for the Fast Track process from the Northridge Human Services Officer to all certifying officers stated that disbursements would be based on the applicant’s letter and ZIP code and that, as a result, items such as proper name spelling, address, fair market rent, and ownership could not be determined until the inspection was completed. Each Fast Track check was accompanied by a letter to the applicant stating: “This is in response to your application for FEMA disaster housing assistance. By cashing the enclosed check, you are confirming that the information is true and correct and are agreeing to use these funds only to meet your disaster-related emergency housing needs, rent for alternative housing, or repairs to your home. You will soon receive a letter from FEMA with more specific information concerning this assistance.” In some cases, the physical inspection of a Fast Track applicant’s residence indicated that repairs costing less than $3,400 would have made the residence habitable. According to FEMA officials, such applicants were allowed to keep the full amount even if it was more than the cost of repairs. Because it did not seek the recovery of amounts exceeding the estimated costs of repairs needed to make a residence habitable, FEMA potentially provided some Fast Track applicants with payments in excess of their needs. One FEMA public statement issued after the earthquake noted that “some people receiving housing checks mistakenly believe that they are not eligible for housing assistance because they’re still able to live in their homes. ’In many cases the housing checks which applicants receive can be used to repair quake damage, including damage to chimneys, windows, doors and walls, even though the applicants weren’t forced to move out of the home . . . .’” In a 1996 report on FEMA’s housing program, the Inspector General reported that FEMA had also not limited temporary housing assistance to applicants with uninhabitable residences in other disasters. The Inspector General concluded that FEMA was using the temporary housing assistance program in a manner inconsistent with the Stafford Act. Specifically, the Inspector General found that rather than make a habitability determination for damaged residences, FEMA “accepts damages over $100 as evidence of an uninhabitable house,” and that FEMA was also paying for repairs apparently not related to making the residence habitable, such as carpet replacement, rain gutters, drywall finishing, wall tiles, and paint. The Inspector General recommended that in the future, FEMA limit grants to uninhabitable housing and to only those repairs necessary to make the housing habitable.
According to a FEMA official, the agency has adopted this recommendation. In the case of the Northridge earthquake, FEMA provided 408,663 applicants with $1.2 billion in housing assistance. In applying for federal assistance shortly after the earthquake, Los Angeles and Ventura counties reported a combined total of 9,919 housing units destroyed, 15,096 suffering major damage, and 29,927 suffering minor damage, for a total of 54,968 residences suffering minor damage or worse. These numbers were based on preliminary assessments. However, in a January 1995 report, the Department of Housing and Urban Development (HUD) stated that a total of 308,900 units of housing were damaged by the Northridge earthquake; presumably, the damage in many cases did not render the residences uninhabitable. Because of limitations in FEMA’s computerized database, we were unable to determine the frequency of the various deviations from normal policy discussed above or the role they played in the apparent discrepancy between housing grants and damaged housing units. FEMA program officials explained that it is difficult to determine when a residence has sustained enough damage to be uninhabitable and that the decisions are subjective. They suggested that FEMA probably tended to err on the liberal side, rather than risk denying aid to someone who needed it, when damages of as little as $100 can be eligible under current policy. FEMA’s basic policy and procedures guidance for the regular temporary housing assistance program—FEMA Instruction 8620.11—does not address the Fast Track process. FEMA’s Inspector General recommended establishing formal procedures for Fast Track after its first use in 1992. Additionally, FEMA officials acknowledge that the May 12, 1987, guidance needs revision and is sometimes modified in actual practice. Well-planned and well-documented guidance could help FEMA avoid operational difficulties in implementing a future Fast Track process and help avoid ineligible payments. FEMA’s Office of Inspector General reviewed FEMA’s experience with Fast Track after Hurricane Andrew. At that time, the Inspector General recommended that FEMA develop formal procedures for the Fast Track process. The recommendations included actions that would help implement the Fast Track process and minimize the loss of federal funds through overpayments. Specifically, the Inspector General recommended that FEMA develop a Fast Track method with appropriate controls and limit grants to 1 month’s rental assistance. (In the wake of Hurricane Andrew, owners had received 4 months’ assistance under Fast Track, and renters received 3 months’ assistance.) FEMA officials could not explain why the Inspector General’s recommendations were not implemented. We note that because of major reorganizations and personnel reassignments that took place between Hurricane Andrew and the Northridge earthquake, many of FEMA’s program staff who worked on Fast Track at Northridge were not involved in the housing program at the time of Hurricane Andrew and were likely unaware of the Inspector General’s recommendations. Also, several FEMA officials had concerns about Fast Track’s vulnerability to fraud, waste, and abuse; hence, formalizing guidance for the process may not have been a priority because of the uncertainty about its future use. 
Several FEMA program officials expressed concern that reducing Fast Track payments to increments of 1 month’s rental assistance—as recommended by the Inspector General—could increase FEMA’s administrative burden and congestion at the Disaster Application Centers, a major concern at Northridge. In the absence of preexisting guidance, officials implementing the Fast Track process after the Northridge earthquake developed guidance on an ad hoc basis, issuing several memorandums detailing how the process would be implemented. The memorandums included information on the amount of rental assistance to be provided, the designated ZIP code areas, the modification of the computerized database to accommodate Fast Track, and the handling of appeals and recertifications (the provision of additional assistance to applicants beyond the initial time period). We believe that if FEMA had followed the Inspector General’s recommendations and developed written guidance for the Fast Track process, some of the operational difficulties experienced following the Northridge earthquake might have been avoided. For example, FEMA might have identified and mitigated limitations in its application-processing software or developed criteria for designating the areas for which the Fast Track process might be used following different kinds of large-scale disasters. Preexisting guidance would avoid the need to develop ad hoc guidance in the crisis atmosphere that inevitably follows a large-scale disaster. A principal advantage of the Fast Track process is that it hastens the distribution of temporary housing assistance grants to some applicants, enabling them to move into alternate housing more quickly than under the regular process. Also, according to FEMA officials involved in the response to the Northridge earthquake, Fast Track provided an intangible benefit by demonstrating to the victims and the general public that help was actually on the way. A principal disadvantage is the relative loss of control over the disbursement of federal funds and the subsequent need to recover ineligible payments. FEMA determined that it should recover about $9.6 million in Fast Track payments made to 3,856 Northridge earthquake recipients. As of September 1997, FEMA had recovered $4 million, and recovery efforts were under way for most of the rest. The obvious benefit of implementing Fast Track is its potential to provide assistance to those victims most in need as quickly as possible—more quickly than would be the case under the regular process. While it is difficult, 3 years after the event, to assess how much Fast Track helped disaster victims, FEMA program officials estimate that without the Fast Track process, applicants would have received their checks several months later because of the catastrophic nature of the Northridge disaster. A primary bottleneck in the regular housing assistance process was physical inspections. As of mid-February 1994, nearly 1,400 inspectors were inspecting approximately 8,000 residences a day; in spite of this, the backlog of inspections grew steadily, from 94,000 on February 7 to a peak of 189,000 on February 13. Fast Track applicants did not have to wait for FEMA to inspect their residences prior to receiving housing assistance checks.
We were unable to determine—and therefore to compare—the average lengths of time actually taken to provide Northridge applicants with temporary housing assistance under either the Fast Track or the regular process, because FEMA’s data systems cannot readily provide information on the average length of time taken to provide temporary housing assistance and because, according to FEMA officials, the accuracy of the database is questionable. According to a FEMA analysis of past large disasters—in which the regular process was used exclusively—the average time between a disaster victim’s application and the Treasury’s mailing of a temporary housing assistance check was 21 days, as follows:

- Application taken and mailed to FEMA’s processing center for processing—2 days.

- Application electronically transmitted to inspector and inspection made—9 days.

- Processing center makes eligibility determination—2 days.

- FEMA requests check issuance from the Treasury Department; check is prepared and mailed—8 days.

According to FEMA’s analysis, this time could be reduced to an average of 10 days for Fast Track applicants because the inspection (usually requiring an average of 9 days) and the normal eligibility determination (usually requiring 2 days) would be performed after the check was issued—thus saving 11 of the 21 days. However, this analysis may not be comparable to the Northridge earthquake or other extraordinarily large disasters. The sheer volume of temporary housing assistance applications resulting from the Northridge earthquake dramatically exceeded that of any previous disaster. In the absence of the Fast Track process, this large volume could have caused the average time period for Northridge applicants to exceed 21 days; if so, then the time savings attributable to Fast Track would be even larger.
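A minimal sketch of this timeline arithmetic, using the average stage durations from FEMA’s analysis of past disasters (the stage names are paraphrased from the list above):

```python
# Average processing stages (in days) from FEMA's analysis of past
# large disasters, as described above.
REGULAR_STAGES = {
    "application taken and mailed to processing center": 2,
    "inspection transmitted and performed": 9,
    "eligibility determination": 2,
    "Treasury check prepared and mailed": 8,
}

regular_days = sum(REGULAR_STAGES.values())  # 21

# Under Fast Track, inspection and the normal eligibility determination
# occur after the check is issued, removing them from the wait.
deferred = {"inspection transmitted and performed", "eligibility determination"}
fast_track_days = sum(days for stage, days in REGULAR_STAGES.items()
                      if stage not in deferred)  # 10

print(regular_days, fast_track_days, regular_days - fast_track_days)  # 21 10 11
```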
When scheduling inspections, FEMA did not distinguish between applications from victims who had already received a check under the Fast Track process and those who had not. Because the non-Fast Track applicants had to wait for the inspections of their residences before receiving assistance, the Fast Track process did not shorten (or lengthen) the time between the application and receipt of funds for these applicants. Most of the FEMA officials contacted for this review stated that expedited check issuance was not the primary benefit of Fast Track. Rather, they cited the intangible benefits of assuring shaken disaster victims that help was forthcoming and helping dissipate the threat of unruly crowds at disaster application centers. According to the officials, the Fast Track process enabled FEMA to tell victims and the media that checks were being issued and sent—not that applications were simply being processed by a government bureaucracy. We did not talk directly to any of the Northridge earthquake victims to identify the process’s advantages and disadvantages, partly because of the time lapse since they received assistance and their potential inability to know whether they had been “Fast Tracked.” However, a FEMA customer survey after the earthquake found a general sense of satisfaction with the agency’s overall disaster response. Most respondents (63 percent) felt that FEMA should have been able to get a check to them within 2 weeks, but two-thirds of those felt that a check received during the second week was sufficient. Seventy-four percent expressed satisfaction with how quickly they received assistance. Slightly over half the respondents (56 percent) felt that the amount of housing assistance they received was insufficient, 40 percent thought it was just right, and 4 percent said it was more than enough. The primary concern with the Fast Track process cited by FEMA officials is the knowledge that some funding will be disbursed to ineligible recipients, thus requiring subsequent recovery efforts. FEMA’s follow-up report on the Northridge earthquake noted the trade-off between the cost of debt collection and the benefits of expedited assistance. FEMA ultimately designated for recovery 6.7 percent ($9.6 million of $143 million) of the temporary housing assistance provided under the Fast Track process, covering 3,856 Northridge earthquake applicants. This figure excludes some ineligible payments made to disaster victims who voluntarily returned the funds. (Because of limitations in its information systems, FEMA could not readily provide the amount of payments voluntarily returned.) However, as noted above, the Fast Track process contributed to FEMA’s decision not to seek recovery of some payments that normally would have been recovered. Therefore, a smaller proportion of Northridge temporary housing assistance payments—including Fast Track payments—was designated for recovery than would otherwise have been the case. FEMA identified three major reasons for recovering payments from ineligible recipients: (1) damage to residences was insufficient to qualify them for assistance, (2) the payee received duplicate damage reimbursements from insurance payments, and (3) the damaged residence was not the recipient’s primary residence. The extent to which an applicant is found to be ineligible generally appears as narrative from the inspector on the inspection form, such as a comment that the damage was insufficient to make the residence uninhabitable or that the applicant’s damages were covered by insurance. Other ineligible applicants may be identified during the processing of the application, such as duplicate applications from the same individual or duplicate applications for the same residence. FEMA’s National Processing Services Center, which handles assistance to applicants, begins the recovery process by sending an ineligible recipient three letters—one every 30 days—requesting the return of disaster funding. If there is no response from the recipient, the case is referred to FEMA’s Disaster Finance Center, where penalties and interest begin to accrue on the debt and three additional letters are sent over a period of 4 months. Subsequently, the cases are turned over to a collection agency and the Treasury Department. Nearly all currently overdue Fast Track payments from the Northridge disaster designated for recovery have reached this point. The Treasury Department then begins garnishing the debt from the recipient’s federal payments (e.g., social security checks, income tax refunds, etc.).
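A minimal sketch of this escalation sequence, enumerating the stages exactly as the narrative above describes them (no FEMA system or exact case timing is modeled):

```python
# Sketch of the recovery escalation sequence for ineligible payments,
# as described above. Intervals are those given in the narrative;
# exact timing varies by case.

RECOVERY_SEQUENCE = (
    "National Processing Services Center sends three demand letters, "
    "one every 30 days",
    "if no response, case referred to the Disaster Finance Center; "
    "penalties and interest begin to accrue",
    "three additional letters sent over a period of 4 months",
    "case turned over to a collection agency and the Treasury Department",
    "Treasury garnishes the debt from the recipient's federal payments "
    "(e.g., social security checks, income tax refunds)",
)

for step, action in enumerate(RECOVERY_SEQUENCE, start=1):
    print(f"step {step}: {action}")
```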
Table 1 shows the status of FEMA’s efforts to recover the funds as of September 11, 1997. It should be noted that while the data in table 1 reflect only payments made under the Fast Track process, the reason a payment was designated for recovery may have nothing to do with the Fast Track process itself. For example, the table includes some payments designated for recovery because the recipient later received insurance proceeds for the same needs. Such payments were made under both the regular housing assistance and Fast Track processes and were designated for recovery regardless of whether they were made under the Fast Track process. Furthermore, the above figures represent only those recoveries made after the cases were turned over to the Disaster Finance Center for collection. As noted above, some recipients voluntarily returned payments; hence, there was no need for the Disaster Finance Center’s involvement. FEMA advised us that because the payment data in its database are unreliable, it could not provide reliable information on the amounts returned voluntarily. FEMA officials were reluctant to estimate the likelihood of additional recoveries because they have so little experience with the newly revised federal recovery process. Prior to Northridge, each FEMA region handled its own recovery efforts. At about the same time as Northridge, the recovery process was centralized at FEMA’s Disaster Finance Center near Berryville, Virginia. Also, until recently, FEMA referred its uncollectible debts to the Internal Revenue Service only. Now they are referred to both the Department of the Treasury for offset and the Department of Justice for possible prosecution. FEMA officials said they do not yet have enough experience with the Treasury Department’s success rate under the new procedures and are also still learning what type of information the Justice Department needs before it believes it has a prosecutable case. FEMA officials pointed out that it takes some time before the Processing Center concludes that payments designated for recovery are bad debts and turns them over to the Finance Center. Additionally, some time was probably lost in transferring the collection responsibility from FEMA’s regional offices to the Finance Center. Also, until recently, cases were referred to the Internal Revenue Service only once a year, and it could take upwards of another year before there was a tax return to apply the debt against.

FEMA provides crisis-counseling funding for screening and diagnosing individuals, short-term crisis counseling, community outreach, consultation, and education services. To receive grants, states must demonstrate that existing state and local resources are inadequate and provide estimates of the number of individuals affected, the types of assistance needed, and their estimated costs. There are two crisis-counseling programs—the immediate services program and the regular program. For approved applications under the immediate services program, the FEMA Regional Director or designee makes funds available to the state for disbursement to its department of mental health. Under the regular program, after approval, funds are transferred from FEMA’s headquarters to CMHS for distribution through the grants management process. While FEMA participates in site visits to service providers, agency officials said that FEMA relies largely on CMHS and the states (the grantees) to ensure that crisis-counseling funds are used and accounted for appropriately. Detailed periodic and final reports on activities and costs are submitted to CMHS and FEMA. For the distribution of funds provided after the Northridge earthquake, FEMA officials said that they visited all service providers, and CMHS officials evaluated the providers’ accounting procedures and controls and found them to be satisfactory.
Section 416 of the Stafford Act states: “The President is authorized to provide professional counseling services, including financial assistance to State or local agencies or private mental health organizations to provide such services or training of disaster workers, to victims of major disasters in order to relieve mental health problems caused or aggravated by such major disaster or its aftermath.” According to CMHS officials, many of the services provided are of an outreach nature, such as visiting homes, schools, disaster application centers, and senior citizens’ homes. FEMA’s draft crisis-counseling program handbook, prepared as a reference for state and local governments, states that eligible activities under the immediate services program include screening, diagnostic, and crisis-counseling techniques, as well as outreach services, such as public information and community networking, which can be applied to meet mental health needs immediately after a major disaster. The immediate services program runs for 60 days, but extensions, generally of 30 days, may be granted if requested by the state. The regular program funds further screening and diagnostic techniques, short-term crisis counseling, community outreach, consultation, and education services that can be applied to meet mental health needs precipitated by the disaster. Prolonged psychotherapy measures are not eligible for program funding. The regular program generally runs for up to 9 months following the disaster. Individuals are eligible for crisis-counseling services if they were residents of the designated disaster area or were located in the area at the time of the disaster and if they have problems of a psychological or emotional nature caused or aggravated by the disaster. A state’s application for crisis-counseling funds must certify that existing state and local resources are inadequate and identify what the mental health needs are. Although it can be adjusted upward or downward on the basis of specific information, a formula has been developed to estimate the number of persons in need of crisis-counseling assistance on the basis of past experience. The formula takes into account the number of fatalities, injuries, homes destroyed or damaged, and unemployment resulting from the disaster. As provided in FEMA’s instructions, FEMA makes the funds available to CMHS, which awards crisis-counseling grants to states—normally to the state’s department of mental health. The states, in turn, disburse funds to the service providers and local governments. CMHS provides the primary federal oversight by reviewing and evaluating the applications and reports submitted by state agencies. Both the application and periodic reporting processes for the regular program are detailed and comprehensive. The application provides estimates of the nature of the need, the number of people needing assistance, and detailed cost estimates. The reports provide information on, among other things, the numbers of people that received assistance, the types of problems that victims experienced, and the actual program costs incurred. In addition, FEMA’s instructions for the program provide that CMHS and FEMA are to make a joint site visit early in the project to ensure that the program is being administered according to the approved application. According to FEMA officials, other program controls include possible audits performed under the Single Audit Act or by the Inspector General.
Following the Northridge earthquake, the state of California applied for $12.8 million in immediate-services-program funding for Los Angeles and Ventura counties on January 31, 1994; FEMA approved the funding on February 1. (In March, the state requested a funding increase to $13.6 million, which FEMA approved.) The regular 9-month program was approved for an additional $22.4 million. Together, the approved funding totaled $36 million. Not all of the approved funds were used, however; actual expenditures totaled about $32 million. Reports on the site visits to service providers found no significant problems. One stated: “There did not appear to be any weaknesses in the relationship and flow of funds to and from providers.” Another noted: “At the Ventura County site, accounting records were reviewed to ensure compliance with policies and procedures and allowability of expenditures. . . . No deficiencies were observed in the accounting system or records reviewed.” Similar comments were made regarding Los Angeles County. The nature of the Fast Track process—providing disaster victims with expedited disaster housing assistance without first verifying their eligibility—represents a trade-off between the risk of delaying needed aid to certain disaster victims and the risk of disbursing funds to ineligible recipients or in excess of recipients’ needs. The absence of established guidance required FEMA to implement the process on an ad hoc basis following the Northridge earthquake, in a crisis atmosphere less conducive to the careful consideration of alternatives. A future large-scale disaster could engender a need for the Fast Track process. If so, FEMA’s continuing lack of guidance for implementing it could allow continued inequitable treatment of disaster victims and the provision of more temporary housing assistance than warranted. These problems could be lessened by establishing formal guidance for the process and incorporating it into the directive for the temporary housing assistance program. FEMA’s Office of Inspector General reached similar conclusions in its January 1993 report on Hurricane Andrew. Also, FEMA’s after-action report on Northridge stated a need to develop guidance “that clarif[ies] assistance requirements and conditions under which fast tracking will occur.” The Director of FEMA should develop written guidance for the Fast Track process that (1) specifies when and under what circumstances the process will be used and (2) explains how to implement the process, including identifying eligible victims and avoiding payments in excess of needs. We provided FEMA with a draft copy of this report for review and comment. In its written comments, FEMA said that the report’s description of the problems faced in providing assistance to the Northridge earthquake victims was comprehensive and balanced. FEMA agreed with our recommendation that guidance should be developed for the Fast Track process, stating that the agency would establish formal guidance for the process and incorporate it into the guidance for the temporary housing program. FEMA also commented that in the last 3 years, it has strengthened its application registration and processing capabilities by building and refining three teleregistration and processing centers and has strengthened its inspection capability by establishing three national inspection service contracts to train inspectors. In addition, FEMA mentioned that it is raising the threshold at which it will consider implementing the Fast Track process. FEMA also suggested some revisions to our report for technical accuracy, which have been incorporated where appropriate.
FEMA’s written comments are contained in appendix V. To examine the authority and rationale for the Fast Track process, we reviewed the legislation authorizing the disaster assistance housing program; the Stafford Act, as amended; and FEMA’s regulations for implementing temporary housing assistance (44 C.F.R. § 206.101). We also requested from FEMA an explanation of its legal basis for implementing the process. (See app. III for FEMA’s written response.) To examine FEMA’s experience with the Fast Track process at Northridge, including whether FEMA adopted its Inspector General’s previous recommendations on the Fast Track process and how FEMA determined the geographic areas included in Fast Track, we interviewed officials from FEMA’s headquarters; FEMA’s OIG office; the Disaster Finance Center and the National Processing Services Center at Mt. Weather, Virginia; and the Disaster Field Office in Pasadena, California (which was responsible for administering FEMA’s assistance to Northridge earthquake victims). We reviewed OIG’s prior studies on the housing program, Fast Track, and the crisis-counseling program, as well as information used by the Disaster Field Office in determining the geographic areas included in Fast Track. We also reviewed FEMA’s news releases, internal memorandums on implementing the Fast Track process, and post-disaster internal assessments. To examine the advantages and disadvantages of Fast Track, we interviewed officials from FEMA’s OIG; FEMA’s Response and Recovery Directorate, including the National Processing Services Center; the Disaster Finance Center; the Disaster Field Office in Pasadena; and the state of California’s Office of Emergency Services. We also reviewed press articles, FEMA’s news releases, internal memos on implementing the Fast Track process, post-disaster internal assessments, and a FEMA customer satisfaction survey conducted after the Northridge disaster. For information on the amounts of erroneous payments and subsequent recoveries, we relied primarily on data provided to us from the Disaster Finance Center’s database containing financial information on recovery efforts. FEMA’s National Processing Services Center’s ADAMS database contained information on additional recoveries, but we were unable to extract this information from the ADAMS database. Our information thus omits some early cases in which disaster victims returned housing assistance funds. The archiving of paper documentation of housing assistance applications, inspections, and grants at an unstaffed repository near San Francisco limited our review to the information contained in these databases. Additionally, both FEMA’s Inspector General and program staff advised us that the ADAMS database was prone to inaccuracies and had a tendency to “crash” or take inordinate amounts of time when performing broad-based informational searches. To examine FEMA’s criteria and process for using crisis-counseling funds and ensuring that they were used for their authorized purpose, we interviewed officials from FEMA’s headquarters (including OIG) and its Pasadena field office, and from the Department of Health and Human Services’ Center for Mental Health Services. We examined numerous reports and studies, including FEMA’s regulations and guidance for implementing the crisis-counseling program; California’s crisis-counseling grant requests, application materials, and internal program memos; and final program and expenditure reports.
To identify whether other federal disaster assistance programs provide assistance for victims prior to determining applicant eligibility, we contacted program officials within the Departments of Housing and Urban Development, Agriculture, Commerce, and Health and Human Services; the Environmental Protection Agency; and the Small Business Administration. Additionally, we reviewed FEMA’s catalog of federal disaster assistance programs, drew on our prior work on HUD and Agriculture disaster assistance programs, and reviewed guidance for implementing their programs. We performed our work from March through September 1997 in accordance with generally accepted government auditing standards. We are sending copies of this report to appropriate congressional committees; the Director, FEMA; the Secretary of Health and Human Services; the Secretary of Agriculture; the Secretary of HUD; and the Director, Office of Management and Budget. We will make copies available to other interested parties upon request. If you or your staff have any questions, please call me at (202) 512-7631. Major contributors to this report are listed in appendix VI. We identified two federal programs, in addition to the Federal Emergency Management Agency’s (FEMA) Fast Track process for its temporary housing assistance program, that provide disaster assistance for individuals prior to verifying their eligibility: the Department of Agriculture’s disaster food stamp program and the Department of Housing and Urban Development’s (HUD) disaster housing program. Both programs may relax their initial requirements for verifying applicants’ eligibility, including income requirements, with subsequent reviews of applicants’ files to identify eligibility problems and, if necessary, take recovery actions. In both cases—as with the Fast Track program—the intent is to get the assistance to victims as quickly as possible. Under the first program, the Department of Agriculture provides disaster food stamps for eligible victims. When a state applies for assistance, the Secretary of Agriculture may approve the issuance of food stamps for up to 30 days to qualifying households within the disaster area. The disaster food stamp program differs from Agriculture’s regular food stamp program in that certain criteria are relaxed when determining eligibility. For example, regular requirements to verify criteria such as residency in the disaster area (as opposed to the project area for the regular program), work requirements, household members’ social security numbers, and the availability of financial resources are either not included as criteria or are verified “where possible.” After the food stamps have been distributed, the applicants’ files are then reviewed to identify problems, such as whether applicants received duplicate benefits. The state agency in charge of disseminating the assistance conducts this post-disaster review of a 10-percent sample of cases, up to a maximum sample size of 1,200 cases.
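A minimal sketch of that sampling rule (the function name and case counts are illustrative, not drawn from the program’s regulations):

```python
# Sketch of the post-disaster review sampling rule described above:
# a 10-percent sample of cases, capped at 1,200.

def review_sample_size(total_cases: int) -> int:
    """Number of case files the state reviews after distribution."""
    return min(total_cases // 10, 1200)

print(review_sample_size(5_000))   # 500
print(review_sample_size(40_000))  # 1200
```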
Generally, local public housing agencies administer the program, providing landlords with rent payments in compliance with a housing assistance payment contract between HUD and the owner. Two significant differences between the FEMA and HUD housing assistance programs are that the HUD program contains income eligibility requirements—the program is targeted only to very-low-income families—and it generally provides the assistance over a longer period of time. HUD's income eligibility requirements are based on annual gross income and family size, and the assistance is guaranteed for a period of up to 18 months. While HUD normally verifies the income eligibility of applicants, for severe disasters such as the Northridge earthquake, the Department allowed housing agencies to issue housing certificates without first fully verifying the applicants' income eligibility. For the Northridge disaster, housing agencies were given 3 months from the time the assistance was provided to verify a victim's income eligibility. The victims were notified that their assistance could be adjusted or terminated if the deferred verification found that they were ineligible. In response to congressional inquiries, HUD stated that delaying the verification enabled the Department to provide housing assistance for victims during the first few days after the disaster.

The Modified Mercalli Intensity (MMI) scale, used in the analysis that follows, characterizes the severity of earthquake shaking at a given location. From most to least severe, the levels are:

MMI 12: Damage nearly total. Large rock masses displaced. Lines of sight and level distorted. Objects thrown into the air.

MMI 11: Railroad rails bent greatly. Underground pipelines completely out of service.

MMI 10: Most masonry and frame structures destroyed with their foundations. Some well-built wooden structures and bridges destroyed. Serious damage to dams, dikes, and embankments. Large landslides. Water thrown on banks of canals, rivers, lakes, etc. Sand and mud shifted horizontally on beaches and flat land. Railroad rails bent slightly.

MMI 9: General panic. Low-quality masonry destroyed; good-quality masonry seriously damaged. Frame structures, if not bolted, shifted off foundations. Frames racked. Serious damage to reservoirs. Underground pipes broken. Conspicuous cracks in ground. In alluviated areas, sand and mud ejected, earthquake fountains and sand craters.

MMI 8: Steering of motor cars affected. Damage to ordinary-quality masonry; partial collapse. Some damage to good-quality masonry but not to reinforced masonry. Fall of stucco and some masonry walls. Twisting and falling of chimneys, factory stacks, monuments, towers, and elevated tanks. Frame houses moved on foundations if not bolted down; loose panel walls thrown out. Decayed piling broken off. Branches broken from trees. Changes in flow or temperature of springs and wells. Cracks in wet ground and on steep slopes.

MMI 7: Difficult to stand. Shaking noticed by drivers of motor cars. Hanging objects quiver. Furniture broken. Damage to low-quality masonry, including cracks. Weak chimneys broken at roof line. Fall of plaster, loose bricks, stones, tiles, and cornices. Some cracks in ordinary-quality masonry. Waves on ponds; water turbid with mud. Small slides and caving in along sand or gravel banks. Large bells ring. Concrete irrigation ditches damaged.

MMI 6: Shaking felt by all. Many frightened and run outdoors. Persons walk unsteadily. Windows, dishes, and glassware broken; knickknacks, books, etc., off shelves. Pictures fall off walls. Furniture moved or overturned. Weak plaster and low-quality masonry cracked. Small bells ring (church, school). Trees and bushes shaken (visible or heard to rustle).

MMI 5: Shaking felt outdoors. Duration estimated. Sleepers wakened. Liquids disturbed; some spilled. Small unstable objects displaced or upset. Doors swing open and close; shutters and pictures move. Pendulum clocks stop, start, or change rate.
MMI 4: Hanging objects swing. Vibration like the passing of heavy trucks, or a sensation of a jolt like a ball striking the walls. Standing motor cars rock. Windows, dishes, and doors rattle. Glasses clink. Crockery clashes. In the upper range of MMI 4, wooden walls and frames creak.

MMI 3: Shaking felt indoors. Hanging objects swing. Vibration like the passing of light trucks. Duration estimated. May not be recognized as an earthquake.

MMI 2: Shaking felt by persons at rest, on upper floors, or favorably placed.

MMI 1: Shaking not felt. Marginal and long-period effects of large earthquakes.

We analyzed the payments that FEMA designated for recovery to determine whether they were concentrated in zone improvement plan (ZIP) codes that were erroneously designated for the Fast Track process. FEMA decided to use the Fast Track process for applicants residing in ZIP code areas with an MMI level of 8 or above. Our analysis showed that the inclusion of ZIP codes that did not meet this criterion did not have a significant effect on payments designated for recovery: 96 percent of the disbursements still subject to recovery were made to applicants in ZIP codes with MMI levels of at least 8. Table IV.1 shows, for the cases still subject to recovery, that 96 percent of the grants were in appropriately designated ZIP codes, as categorized by MMI shaking intensity. (An analysis of the data developed for all grants designated for recovery and reported on by FEMA's Inspector General gives much the same result.) Thus, it appears that a more accurate designation of eligible ZIP codes would not have significantly reduced inappropriate disbursements at Northridge. Many of the errors may have been data entry errors rather than mistakes in selecting ZIP codes. (A simplified sketch of this ZIP-code computation follows the agency comments below.)

The following are GAO's comments on the Federal Emergency Management Agency's letter dated September 25, 1997.

1. GAO revised the report to address FEMA's comments numbered 1 through 12.

2. FEMA's current policy does provide for the award of home repair funds when damages are more than a $100 minimum. However, our report notes that FEMA's IG reported that FEMA was accepting damages of over $100 as evidence of an uninhabitable house and that FEMA was also paying for repairs apparently not related to making the residence habitable, such as carpet replacement, rain gutters, drywall finishing, wall tiles, and paint. Because the statement is that of the FEMA IG, rather than GAO, we did not change the language involved.

3. FEMA's updated figures were confirmed against table 1.

4. GAO revised the report to address FEMA's comment.

5. In the agency comment section on page 23 of the report, we note FEMA's comments about its recent efforts to strengthen its registration, inspection, and processing capability for future disasters and to raise the threshold at which FEMA would consider implementing the Fast Track process.
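The ZIP-code analysis described above reduces to grouping the recovery-designated grants by the MMI level of each ZIP code and computing the share of dollars in ZIP codes at MMI 8 or above. The following is a minimal sketch of that computation, not GAO's actual program; the data layout and sample records are hypothetical, while the MMI-8 cutoff and the roughly 96-percent result come from this report.

```python
# Minimal sketch of the ZIP-code/MMI analysis. The mapping and grant records
# below are hypothetical placeholders; real analysis used FEMA's databases.

# Hypothetical mapping of ZIP codes to MMI shaking-intensity levels.
zip_mmi = {"91324": 9, "91402": 8, "90210": 6}

# Hypothetical grants designated for recovery: (ZIP code, dollar amount).
recovery_grants = [("91324", 3400.0), ("91402", 2250.0), ("90210", 1800.0)]

eligible = sum(amt for z, amt in recovery_grants if zip_mmi.get(z, 0) >= 8)
total = sum(amt for _, amt in recovery_grants)

# Share of recovery-designated disbursements made in appropriately designated
# ZIP codes (MMI >= 8); GAO's analysis found this to be about 96 percent.
print(f"{100 * eligible / total:.1f}% of recovery dollars in MMI >= 8 ZIP codes")
```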
Pursuant to a congressional request, GAO examined several issues pertaining to the Federal Emergency Management Agency's (FEMA) use of the Fast Track process and FEMA's crisis-counseling assistance to victims of the Northridge earthquake, focusing on: (1) the authority and rationale for the Fast Track process; (2) FEMA's experience with the Fast Track process in Northridge and whether the process was influenced by the Office of Inspector General's recommendations; (3) the advantages and disadvantages of the Fast Track process, including the amounts of payments that FEMA designated for recovery and subsequently recovered and the reasons for ineligibility; and (4) FEMA's criteria and process for providing crisis-counseling funds and ensuring their use for authorized purposes. GAO noted that: (1) the legislation authorizing FEMA's temporary housing assistance has no explicit provision for a process such as Fast Track; (2) however, as FEMA concluded, the act gives the agency wide latitude in providing expeditious assistance for disaster victims; (3) FEMA's rationale in implementing the Fast Track process following the Northridge earthquake was to assist the largest number of disaster victims in the shortest possible amount of time; (4) in implementing the process, FEMA experienced operational difficulties including the inconsistent application of criteria when designating zip codes; (5) because of these errors, not all Northridge victims in similar circumstances were treated the same; (6) FEMA also experienced constraints with the computer software used to process applications; (7) these difficulties, combined with an enormous volume of applications for assistance and FEMA's decisions on applicants' eligibility for payments made under both the regular and Fast Track processes, may have contributed to FEMA's provision of housing assistance beyond actual needs; (8) FEMA has not developed written guidance for implementing the Fast Track process, even though FEMA's Inspector General recommended establishing formal procedures after the Fast Track process's first (and only other) use in 1992; (9) a principal advantage of the Fast Track process is that it provides temporary housing assistance grants for some applicants more quickly than would the regular process; (10) according to FEMA officials involved in the response to the Northridge earthquake, Fast Track provided an intangible benefit by demonstrating to the victims and the general public that help was actually on the way; (11) a principal disadvantage to Fast Track is the relative loss of control over the disbursement of federal funds and the subsequent need to recover ineligible payments; (12) FEMA ultimately designated for recovery 6.7 percent ($9.6 million of $143 million) of the temporary housing assistance provided under the Fast Track process for 3,856 Northridge earthquake applicants; (13) FEMA provides crisis-counseling funding for screening and diagnosing individuals, short-term crisis counseling, community outreach, consultation, and educational services; and (14) for funds provided after the Northridge earthquake, FEMA officials said that they visited all service providers and that center officials evaluated their accounting procedures and controls and found them to be satisfactory.
GSA is the federal government's real property manager, providing office space for most federal agencies. In this capacity, GSA is responsible for keeping the approximately 1,700 federal buildings it manages in good repair to ensure that the value of these assets is preserved and that tenants occupy safe and modern space. Maintaining these buildings is particularly challenging because many buildings in GSA's portfolio are more than 50 years old, monumental in design, and historically significant. Unlike a private sector company, GSA cannot always dispose of a building simply because it would be economically advantageous to do so.

GSA is responsible for identifying, funding, and completing needed repairs and alterations at the federal buildings it manages. These needs are identified primarily through detailed building inspections and evaluations done by GSA regional staff or private sector architect-engineering firms under contract with GSA. The scope of repair and alteration work varies, but the work generally falls into one of three broad categories: (1) recurring repairs, such as periodic painting, and minor repairs of defective building systems that cost less than $10,000; (2) nonrecurring repairs and alterations that cost more than $10,000 but less than a prospectus-level threshold ($1.99 million for fiscal year 2001 projects) that is adjusted annually; and (3) major repairs and alterations estimated to cost more than the prospectus-level threshold.

Building repair and alteration projects expected to cost more than the prospectus-level threshold cannot start unless they are approved by OMB and funded by Congress. To obtain approval for these projects, GSA provides OMB and Congress with a prospectus for each repair and alteration project included in its annual budget submission. The prospectus includes information on the size, cost, location, and other features of the proposed work; a justification for proceeding with the work; and an economic analysis of the alternatives to doing the requested repairs and alterations. On the basis of the individual prospectuses, OMB recommends funding for various proposed repair and alteration projects, and Congress decides whether or not to approve the funding. In addition to prospectus-level funding, OMB and Congress consider proposals for funding that GSA can use to complete nonrecurring projects costing less than the prospectus-level threshold and recurring projects regardless of cost. For fiscal years 1995 through 2001, GSA was authorized about $2 billion for prospectus-level projects and $2.3 billion for nonprospectus-level projects. This report deals primarily with GSA's process for assessing and selecting prospectus-level projects.

Repairs and alterations, as well as other capital and operating expenses associated with maintaining federal buildings, are financed by the Federal Buildings Fund (FBF), a revolving fund administered by GSA that was authorized and established by the Public Buildings Amendments of 1972. Beginning in 1975, FBF replaced direct appropriations to GSA as the primary means of financing the operating and capital costs associated with federal space. GSA charges federal agencies rent for the space that they occupy, and the receipts from the rent are deposited into FBF. In addition, Congress may appropriate additional money to the fund. Congress exercises control over FBF through the annual appropriations process, which sets limits, known as obligational authority, on how much of the fund can be expended for various purposes.
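The three cost categories described above amount to a simple classification rule. The sketch below is illustrative only, not GSA's actual system: the function, the recurring flag, and the dollar inputs are hypothetical, and the $1.99 million prospectus threshold is the fiscal year 2001 figure cited above.

```python
# Illustrative three-tier classification of repair and alteration work,
# based on the cost categories described in this report. Not GSA's system;
# the threshold shown is the fiscal year 2001 figure and is adjusted annually.

PROSPECTUS_THRESHOLD_FY2001 = 1_990_000  # dollars
MINOR_REPAIR_LIMIT = 10_000              # dollars

def categorize_project(cost: float, recurring: bool = False) -> str:
    """Classify a project by estimated cost and whether the work is recurring."""
    if recurring or cost < MINOR_REPAIR_LIMIT:
        return "recurring/minor repair"       # e.g., periodic painting
    if cost <= PROSPECTUS_THRESHOLD_FY2001:
        return "nonrecurring, nonprospectus"  # funded without a prospectus
    # Requires a prospectus, OMB approval, and congressional funding.
    return "prospectus-level"

print(categorize_project(8_500))                   # recurring/minor repair
print(categorize_project(750_000))                 # nonrecurring, nonprospectus
print(categorize_project(4_200_000))               # prospectus-level
print(categorize_project(45_000, recurring=True))  # recurring/minor repair
```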
FBF revenues must first be used to meet building operating expenses, such as payments for leased space and utility costs. Congress then allocates revenues between the two capital programs–the construction of new federal buildings and the repair and alteration of existing buildings.

GSA headquarters management recommended 12 of the 27 prospectus-level design projects proposed by GSA regional staff for fiscal year 2001 funding. In examining the 27 projects, GSA officials used a multifaceted process that relied on empirical data and professional judgment coupled with specific selection criteria and computer analysis that compared each of the competing projects. The criteria included such factors as a project's economic return, risk, and urgency. Each project examined was given a numerical score and ranked in priority order. The projects with the highest initial rankings usually became the projects that GSA recommended for funding. However, GSA recommended two projects for funding that were not among those with the highest initial ranking. GSA provided explanations for moving the lower ranked projects ahead of the higher ranked projects. GSA's process resulted in buildings with well-documented repair and alteration needs being recommended for funding in fiscal year 2001.

Under the oversight of GSA's headquarters, GSA's regional staffs, who operate and maintain the federal buildings, are responsible for identifying the prospectus-level projects. GSA headquarters staff is responsible for establishing a coherent national program and budget request. Each year, GSA's Capital Investment and Leasing Program Call plays a key role in the agency's obtaining the necessary resources to maintain its buildings. This planning document, commonly referred to as the Program Call, is prepared each year by the portfolio management staff in GSA's headquarters. The Program Call provides, among other things, the guidance and criteria that the regions are to follow in identifying and proposing prospectus-level repair and alteration projects for funding consideration.

According to GSA officials, the Program Call for fiscal year 2001 emphasized that the regions should follow a portfolio rather than a traditional facilities management approach in proposing repair and alteration projects, and GSA headquarters would follow this approach in selecting the projects to be included in its budget request. GSA management decided to adopt the portfolio management approach because they believed it was a more effective way to manage real property and the Repairs and Alterations Program. Under a portfolio approach, GSA chooses to make reinvestment decisions on the basis of what is best for its overall inventory of buildings rather than the need to repair or modernize an individual building. The Program Call stressed that because of limited resources, the funds that were available for completing repair and alteration projects would be given to cost-effective projects with high income-producing potential. Thus, projects that would improve the buildings' functionality and income-producing potential were favored over other repair and alteration projects, such as full building modernization. GSA recognized that by implementing this approach, some buildings, such as those in low-rent or declining markets, may receive limited repair and alteration funding and therefore would be maintained at a more basic level as compared to other buildings in the inventory.
GSA officials also told us that they recognized that implementing the portfolio management approach meant that buildings needing repairs and alterations that were not expected to increase rent revenues would face difficulty in competing for limited prospectus-level funding. This was true for the proposed repair and alteration projects that were assessed and ranked in fiscal year 2001. For example, recapturing vacant space or other revenue enhancement was a primary reason for selecting 9 of the 12 projects included in GSA's fiscal year 2001 budget request.

GSA officials said they recognized that they cannot totally ignore projects that do not increase rent revenues. They said that GSA plans to request $75 million in fiscal year 2002, and additional funds in future years, for prospectus-level projects that focus on keeping buildings operational and safe rather than on significantly increasing rent revenues. We agree that GSA cannot ignore nonrevenue-generating projects, especially when they involve health and safety risks to employees and visitors, as discussed later in this report.

GSA officials emphasize that they are very concerned with health and safety issues, as evidenced by language in their fiscal year 2002 and 2003 Program Calls. GSA officials said their policy is to take action to alleviate immediate health and safety problems when they occur and to continuously monitor potential problem areas with the intent of avoiding dangerous situations. GSA officials believe that no employee or visitor to a federal building faces imminent danger because a building is unsafe. However, GSA officials said that they must sometimes take a Band-Aid and monitoring approach as a result of limited funding, and this approach does not always remove the long-term risk associated with the deterioration that could cause health and safety problems. Given this, it is important that GSA continuously focus on buildings that have significant operational deficiencies and health and safety concerns, identify needed funding, and give sufficient funding priority to those projects that would effectively eliminate the deterioration that is causing, or is likely to cause, significant health and safety problems.

Using the guidance contained in the Program Call, along with their knowledge of the buildings' physical condition and the needs of their tenants, GSA's regional staff identified 27 repair and alteration design projects and submitted them to GSA headquarters for funding consideration in fiscal year 2001. According to the officials with whom we met, the regions are given a great deal of discretion in determining which buildings they propose for repair and alteration funding. These officials said that such discretion is needed because each building in GSA's inventory is unique in its construction, operating systems, repair and alteration needs, and client agency needs. Our analysis of project proposals and supporting documentation, as well as discussions with GSA staff in the three regions that we visited, indicated that the regions had selected and prioritized the buildings for funding in fiscal year 2001 on the basis of detailed analyses and discussions about the condition of their buildings and the repair and alteration needs at these buildings. Furthermore, the prospectus-level projects submitted by the three regions requested funding to satisfy well-documented repair and alteration needs. Our analysis found that all were prepared in accordance with GSA's criteria and guidance.
Once the regions had identified their proposed projects, they submitted the proposals, along with all supporting data, to GSA headquarters for review and funding consideration. There, portfolio management staff and the Capital Investment Panel assessed the merits of each proposed project and ranked the projects with the aid of computer-based decisionmaking software. This software–Expert Choice–employs an analytic hierarchy process decisionmaking methodology. Five weighted criteria, which were developed by GSA's Capital Investment Panel, were used to rank the projects competing in fiscal year 2001. The criteria, in weighted order, were:

1. economic return–the project will generate additional revenue for the FBF;
2. project risk–the project will begin in the planned fiscal year and use the authorized funding;
3. project urgency–the project will correct building conditions that are unsafe or involve severe deterioration;
4. community planning–the project will protect the building's historic significance and positively impact the local community; and
5. customer urgency–the project will have a positive impact on the tenant agencies' operations or mission.

According to GSA officials, the scores resulting from the computer analysis were a major part of the assessment process, but they were not the sole basis for deciding which prospectus-level repair and alteration projects should be recommended for funding. They said Expert Choice was never intended to, and did not, replace the professional judgment and knowledge of the staff involved in assessing the merits of the proposed projects. Nonetheless, the computer-derived scores for 10 of the 12 repair and alteration projects that were included in GSA's fiscal year 2001 budget request were among the highest scores for all 27 of the competing projects. (A simplified sketch of this kind of weighted-criteria ranking appears at the end of this discussion.) Table 1 shows the scores for the 27 competing projects and identifies the 12 projects that were selected for inclusion in GSA's fiscal year 2001 budget request to Congress.

As can be seen in table 1, the building with the highest score–Federal Office Building 8 (FOB 8), which is in Washington, D.C., and has the Department of Health and Human Services as its major tenant–was not included in GSA's fiscal year 2001 budget request. Similarly, the Eisenhower Executive Office Building (EEOB) was also not included in the budget request, even though its score of 69.1 was higher than the scores of nine of the buildings that were included in GSA's funding request. Conversely, Federal Office Building 3 (FOB 3), which is in Suitland, MD, and primarily houses the Bureau of the Census, and the GSA Regional Office Building (GSA ROB) were included in the budget request, even though they both had lower scores–54.9 and 47.3, respectively–than some of the projects that were not recommended for funding.

GSA officials explained that in each situation a unique set of circumstances affected the final decision of whether to include the prospectus-level project in the budget request. According to the officials with whom we spoke, FOB 8 was not included in GSA's budget request because the regional office that originally submitted the proposed project–the National Capital Region–withdrew it from funding consideration after learning that a third party was interested in acquiring the building and converting it into a museum. GSA did not want to reinvest in FOB 8 if there was a chance that the building would not be retained.
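Expert Choice implements a full analytic hierarchy process, deriving criterion weights from pairwise comparisons; the sketch below illustrates only the simpler underlying idea of ranking projects by weighted criteria. The weights and project scores are hypothetical, chosen to mirror the weighted order given above, and are not GSA's actual values.

```python
# Illustrative weighted-criteria ranking, loosely modeled on the five criteria
# GSA used in fiscal year 2001. All numbers are hypothetical; Expert Choice's
# analytic hierarchy process derives weights from pairwise comparisons rather
# than taking them as direct inputs.

weights = {
    "economic_return": 0.35,
    "project_risk": 0.25,
    "project_urgency": 0.20,
    "community_planning": 0.12,
    "customer_urgency": 0.08,
}

# Hypothetical criterion scores (0-100) for three competing projects.
projects = {
    "Building A": {"economic_return": 90, "project_risk": 70, "project_urgency": 40,
                   "community_planning": 60, "customer_urgency": 50},
    "Building B": {"economic_return": 50, "project_risk": 80, "project_urgency": 95,
                   "community_planning": 70, "customer_urgency": 85},
    "Building C": {"economic_return": 75, "project_risk": 60, "project_urgency": 55,
                   "community_planning": 40, "customer_urgency": 45},
}

def weighted_score(scores: dict) -> float:
    """Composite score: sum of each criterion score times its weight."""
    return sum(weights[c] * s for c, s in scores.items())

# Rank projects from highest to lowest composite score.
for name, scores in sorted(projects.items(), key=lambda kv: -weighted_score(kv[1])):
    print(f"{name}: {weighted_score(scores):.1f}")
```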
GSA headquarters staff decided not to include EEOB in the budget request, even though it scored 5th in the assessment process, because they believed that the project had not been adequately planned and there was too high a risk that the project could not be started in fiscal year 2001. In addition, regional officials told us that the Expert Choice score awarded to this building was too high because data pertaining to the expected economic return of the project were erroneously overstated when they were entered into the computer system. We were told there was no documentation to verify this assertion.

On the other hand, FOB 3 was included in GSA's budget request because additional information was considered after the project had been assessed by Expert Choice. According to a GSA official, this additional information showed that there could be an opportunity to move federal tenants from leased space into FOB 3 if additional space could be provided in the building. Therefore, GSA believed that funding a prospectus-level project at FOB 3 would provide this additional space. This assumption, in turn, led to an increase in the project's potential economic return, which made it more competitive than other projects that competed for funding in fiscal year 2001.

Similarly, information received after GSA had assessed the proposed projects also led to the inclusion of GSA ROB in the budget request. This additional information involved the decision by a major tenant to vacate approximately one-third of GSA ROB. According to a GSA official, the tenant's decision to move out of the building created an opportunity for GSA to complete a major renovation of the vacant space, which is always less expensive than renovating occupied space. In summary, the unique circumstances surrounding the FOB 3 and GSA ROB projects made them more desirable for funding than other projects that had initially been ranked higher.

In addition to FOB 8 and EEOB, 13 other buildings involving prospectus-level projects were not included in GSA's fiscal year 2001 budget request. Our review of GSA documents related to these proposed projects showed that all of them had building repair and alteration needs that were well documented, and most of these needs focused on building systems upgrades or modernization work. According to GSA officials, this type of work is necessary to keep buildings fully operational and therefore must be funded even though it usually does not increase rent revenue. For example, the New Orleans Courthouse needs major upgrades to its heating, ventilation, and air-conditioning system and its hot and cold water systems. These needs were identified in a detailed building inspection completed in 1995, but the work has not yet been performed. GSA's regional office submitted a prospectus-level project requesting fiscal year 2001 funding to make the systems upgrades, but the project was not included in GSA's 2001 budget request because it was not considered as competitive as the other proposed projects that were assessed for funding consideration. The GSA regional officials with whom we spoke said that this work will remain in the inventory of unfunded repairs and alterations, and if the work is delayed much longer, the ability of the building's tenant–the U.S. 5th Circuit Court of Appeals–to perform its mission could be adversely affected.
As previously discussed, GSA considered funding support for 27 prospectus-level repair and alteration design projects in fiscal year 2001, but only 12 of these projects were included in its budget request that year. According to GSA, the remaining 15 projects were not included in the budget request because the anticipated amount of funding was insufficient to finance all 27 projects. However, the need for the repairs and alterations included in the 15 unfunded projects did not simply disappear; instead, this work remains in GSA's inventory of unfunded repairs and alterations. GSA data show that the inventory of unfunded prospectus- and nonprospectus-level work is in the billions of dollars. Furthermore, the existence and growth of such an inventory are not new. Over the past decade, we have reported several times on GSA's struggles to meet its repair and alteration needs and on the growing inventory of work that has resulted.

In March 2000, we reported that at the end of fiscal year 1999, GSA data showed that it had an unfunded inventory of approximately $4 billion in repairs and alterations that needed to be completed at its buildings. This inventory included both prospectus and nonprospectus work items. Our report concluded that inadequate program data on repairs and alterations, the lack of a strategic plan for managing repair and alteration projects, and limited funding were three long-standing obstacles that impeded GSA's ability to satisfy its repair and alteration needs. The report noted that GSA program managers were working to improve the quality of program data and to develop a multiyear plan that would identify the prospectus-level repair and alteration work that needs to be funded over a 5-year period.

GSA officials recognized then, as they do now, the need for accurate, consistent, and complete repair and alteration data. Program managers with whom we spoke agreed that such data are crucial if they are to determine the total repair and alteration needs and provide effective program management and oversight. They also recognized that a multiyear plan that identifies, in priority order, all prospectus-level repair and alteration projects would allow them to more easily target the buildings with the greatest needs, better allocate scarce resources, and monitor progress in reducing the repair and alterations inventory. The plan was also to provide decisionmakers with a context in which to judge how projects recommended for current year funding relate to the long-term repair and alteration needs of federal buildings.

GSA officials told us, however, that even when they improve data quality and institutionalize a multiyear approach for identifying and prioritizing prospectus-level repair and alteration requirements, funding limitations will likely remain a significant roadblock to effectively reducing the backlog of repair and alteration work. We agree that insufficient funding is a major obstacle that GSA faces, and we believe that it is likely to continue as an obstacle unless actions are taken to generate additional revenues to finance repairs and alterations. For example, GSA data show that over the 7-year period ending with fiscal year 2001, after OMB and congressional review, Congress authorized 63 percent of the approximately $6.8 billion in new obligational authority that GSA had initially requested for making building repairs and alterations.
It should also be noted that during these 7 years, Congress approved only 50 percent of the $3.9 billion GSA had requested for prospectus-level repair and alteration projects. The following table shows the total amount of funding authority GSA requested and the amount of obligational authority Congress approved on an annual basis over the 7 years for prospectus- and nonprospectus-level projects. According to GSA officials, these funding shortfalls contributed to the inventory of unfunded repair and alteration work. Furthermore, funding deficiencies are exacerbated by the increased demand for repairs and alterations associated with GSA's aging buildings.

In our March 2000 report, we pointed out that historically FBF has not produced sufficient resources to finance all repairs and alterations while also covering the day-to-day operating costs of federal buildings and providing the funding needed to construct new buildings. This is evidenced by the fact that even though FBF averaged about $5.3 billion in annual revenues for each of the past 7 fiscal years, almost 90 percent of this money was spent for other purposes, such as building operating costs, leased space costs, and construction of new federal facilities. On average, only $606 million per year was used for completing repairs and alterations.

Our report also pointed out that the inventory of unfunded repair and alteration work is not static–even as GSA completes repairs and alterations, new requirements are identified. On the basis of the analysis that we completed early last year, it was likely that the inventory of prospectus and nonprospectus repairs and alterations would grow over the next 5 years. Our analysis used GSA's $900 million estimate as the amount of funding that it planned to request to finance repairs and alterations in each of fiscal years 2001 through 2005. We assumed that the cost of new repairs and alterations identified each year would range from $600 million to $1.2 billion. We calculated these amounts using the assumption that the cost of new work identified each year would range from 2 to 4 percent of the estimated $30 billion aggregate replacement cost of GSA's portfolio of buildings. According to the National Research Council, these criteria have been widely quoted in the facilities management literature, and GSA officials agreed that our assumptions were reasonable. On the basis of these assumptions, we projected that GSA's inventory of repairs and alterations would range between $2.6 billion and $6.2 billion at the end of fiscal year 2005. (The arithmetic behind this projection is sketched below.) It should be noted that Congress approved about $682 million for making repairs and alterations in fiscal year 2001–$218 million less than the estimated $900 million used in our analysis. Given this, our projected amounts of growth in the repair and alteration inventory may have been conservative.

GSA officials are trying to develop alternative means of generating additional revenues to help pay for building repairs and alterations. These initiatives include investing in repair and alteration projects that return the most rent revenue to FBF and reducing building operating costs and redirecting the savings to capital investment, including repairs and alterations. In addition, GSA supported S. 2805, which was introduced during the 106th Congress and which, had it been enacted, would have authorized federal agencies, including GSA, to retain proceeds from several types of real property transactions for needed capital investment.
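The inventory projection described above can be approximated with simple arithmetic. The sketch below is a rough reconstruction under stated assumptions, not GAO's actual model; the starting inventory and year count are assumptions, which is why the output brackets rather than exactly reproduces the published $2.6 billion to $6.2 billion range.

```python
# Rough reconstruction of the repair and alteration inventory projection.
# Assumptions (not all specified in this report): start from the roughly
# $4 billion inventory reported at the end of fiscal year 1999 and project
# through fiscal year 2005 at GSA's planned $900 million annual funding level.
# New work each year is 2 to 4 percent of the estimated $30 billion aggregate
# replacement cost of GSA's buildings.

start_inventory = 4.0    # $ billions, unfunded inventory at end of FY1999
annual_funding = 0.9     # $ billions of repairs and alterations funded per year
replacement_cost = 30.0  # $ billions, aggregate replacement cost of portfolio
years = 6                # FY2000 through FY2005 (an assumption)

for rate in (0.02, 0.04):  # new work identified each year, as a share of cost
    inventory = start_inventory
    for _ in range(years):
        inventory += rate * replacement_cost - annual_funding
    print(f"new-work rate {rate:.0%}: projected FY2005 inventory ${inventory:.1f}B")
# Prints roughly $2.2B (2%) and $5.8B (4%); GAO's published range, based on
# its more detailed assumptions, was $2.6 billion to $6.2 billion.
```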
GSA is also developing standards that would help determine the type and scope of repairs and alterations needed to meet GSA's long-term plan for each building. In addition to these initiatives, H.R. 3285, introduced in fiscal year 2000, would have authorized GSA to use public-private partnership arrangements to renovate and rehabilitate federal buildings had it been enacted.

According to GSA officials, the portfolio management approach they are following is directed at reinvesting in buildings that will maximize the financial return for the portfolio as a whole. Thus, funding prospectus-level repair and alteration projects that recapture vacant space or otherwise increase FBF revenue best serves the overall portfolio. This reinvestment strategy assumes that by increasing future rent revenue, additional funding will be available to finance more building repairs and alterations.

GSA is also attempting to reduce its building operating costs so that more FBF revenues can be used to make repairs and alterations. Recently, the International Development Research Council recognized GSA for reducing its operating costs to 15 percent below comparable expenses in the private sector. GSA officials explained that by reducing building operating costs, more money from FBF could be made available to finance repairs and alterations. GSA estimated that for the 27 months ending September 30, 2000, it avoided incurring over $300 million in leasing, cleaning, maintenance, and utility costs by paying lower rates than the private sector. A former GSA Public Buildings Service (PBS) Commissioner said that by reducing operating costs GSA could have additional funding to direct to, among other things, repairing, renovating, and modernizing public buildings. GSA officials told us that they plan to request congressional authority to spend a higher percentage of FBF revenue on repairs and alterations in future years. The President's blueprint for the fiscal year 2002 budget proposed $827 million for GSA's Repairs and Alterations Program.

GSA supported S. 2805, which was introduced in June 2000. Among other things, it would have authorized federal agencies to retain proceeds from several types of real property transactions, such as the sale of unneeded assets, and to use these proceeds for, among other things, real property improvements. Under such a bill, federal agencies, including GSA, would be authorized, under prescribed conditions, to transfer, sell, sublease, and lease real property to other federal or nonfederal entities, and any proceeds from the transfer or disposition would be credited to each agency's capital asset account. Any amounts credited or deposited to this account could be used only to pay for capital asset expenditures. GSA supports such legislation because it would provide an incentive for land-holding agencies to better manage their real property.

GSA has not estimated how much revenue would be generated if it were granted such authority or what impact such authority would have on its overall repair and alteration inventory. However, GSA officials believe that any additional revenue would be an improvement over the current situation and would function as an incentive. As we pointed out in our recent testimony, both the National Research Council and we believe that such incentives are needed to encourage agencies to better manage their assets.
GSA also believes such authority makes sense because it would make the operations of federal land-holding agencies more consistent with those of private companies and would create opportunities for cost avoidance, reduce the number of mission-deficient properties under federal ownership, and improve the quality and productivity of federal facilities.

GSA also plans to implement standards that will help determine the type and scope of repair and alteration work to be done at a building on the basis of, among other things, how long GSA plans to retain the building. The standards, which will be used in conjunction with a computer software package developed and used by the private sector to help estimate repair and alteration costs, are intended to help determine a cost-effective level of reinvestment that maintains an asset's value and income potential. For example, the standard for repairs and alterations at a historic building that is expected to remain in the inventory, like EEOB, would be much higher than for a building that is to be retained for a shorter time. In a building like EEOB, the standard may justify installing ceramic tile, which has a higher initial cost but a longer life, rather than carpeting, because this could lead to a lower life-cycle cost. Likewise, GSA may opt to replace a heating, ventilation, and air conditioning system in an EEOB-type building but only repair the existing system in a building that is a potential candidate for disposal. According to GSA officials, selecting options that make the most sense in terms of life-cycle costs could make more FBF funds available for repair and alteration needs in the long run. A GSA official said this practice is consistent with what he has been told is used in the private sector. No time frame has been established for developing a final position on this initiative.

Another effort intended to address repair and alteration needs was H.R. 3285, which was introduced in fiscal year 2000. This bill would have authorized GSA, under specific circumstances, to use public-private partnerships to develop, renovate, or rehabilitate facilities. Under these partnerships, the nongovernmental entity would lease federal property and develop, rehabilitate, or renovate it for use, in whole or in part, by executive agencies of the federal government. From the government's perspective, the primary purpose of the partnership effort would have been to enhance the functional and economic efficiency of the real property. The nongovernmental entity would have exercised control of the partnership and received a majority interest in its profits. GSA's revenues from the partnership could have been used to make physical improvements to other federal real property; these funds would have been deposited in a fund set up for this purpose. After a specified period of time, the partnership would have expired.

The idea of public-private partnership arrangements is not new. Congress has enacted legislation that provides certain agencies with a statutory basis to enter into partnerships and retain the revenue they receive from them. Our February 1999 report on federal public-private partnerships discussed six public-private partnerships that involved the National Park Service, the Department of Veterans Affairs (VA), and the Postal Service and reported positive outcomes. For example, Congress passed legislation in August 1991 that authorized the Secretary of VA to enter into public-private partnerships through enhanced leasing authority.
This legislation authorized VA to lease its properties and retain the resulting revenues. As of June 1998, VA had entered into 10 partnerships through its enhanced leasing authority, and VA officials estimated that $25 million in savings had resulted from lower construction, operation, and maintenance costs. VA officials told us they are extremely pleased with the authority.

In testifying on S. 2805 and H.R. 3285, we said that the ability to retain proceeds from real property transactions and the opportunity to use public-private partnerships should help federal property managers become better stewards of the nation's assets and more effectively sustain the taxpayers' investment. In considering whether to authorize GSA to retain all or some proceeds from real property transactions, it would be important to ensure that Congress continue its appropriations control and oversight of how the proceeds are used. Congress could do this by using the appropriations process to review and approve GSA's proposed use of the proceeds for prospectus and nonprospectus projects. It is also important that these initiatives be evaluated to determine whether they have had a significant impact on reducing the repair and alteration backlog and whether the continued use of these funds for repair and alteration work reflects the most appropriate investment for the government as a whole. Furthermore, public-private partnership arrangements should be undertaken only when they reflect the best economic value available for the government.

Federal buildings, like any other physical structures, tend to deteriorate and become obsolete when needed repairs and alterations are delayed or not made. In 1991, we reported that because of delays in reinvesting in federal buildings, over one-third of the 25 buildings that we analyzed needed major repairs and alterations. These needs included repairing or replacing leaking roofs and plumbing systems, installing fire alarm and sprinkler systems, and upgrading electrical and heating and cooling systems. We also reported that the condition of federal buildings had contributed to poor quality working space for employees, impeded agencies' operations, and in some instances jeopardized employees' health and safety. In 1998, a National Research Council report described the physical condition of federal facilities as deteriorating. The report concluded that this deterioration occurred, in part, because of continuous delays in completing necessary maintenance and repairs to the facilities. More recently, our analysis of GSA data found that at the end of fiscal year 1999, 44 federal buildings needed repairs and alterations estimated to cost over $20 million per building.

Conditions similar to those described above exist today at some federal buildings. Our analyses of six federal buildings illustrate how the lack of investment in building repairs and alterations can lead to deterioration of the government's buildings and other more serious consequences. For example, our review of available documentation and our observations of FOB 3, located in Suitland, MD, showed that the heating, ventilation, and air conditioning system is incapable of providing proper air circulation or maintaining desired temperatures throughout the building, which results in higher operating costs. The air ventilation system is currently inoperable and has been turned off since the early 1970s.
This has resulted in the building containing levels of carbon dioxide that exceed industry standards, thereby exposing tenants to unacceptable conditions. Opening the windows was a proposed solution to this problem, but this is not always possible because the windows are often painted shut with lead-based paint that may be peeling and chipping; opening such windows could release lead into the air and create a potential health hazard. Figure 2 shows one of the windows with peeling and chipping paint. Moreover, available documentation verified that the building's water is not safe for drinking because it contains metal contaminants. Therefore, GSA must supply, at an added cost, bottled water for the building tenants. Figure 1 on page 4 of this report shows a water fountain with a sign warning tenants not to drink the water, along with the bottled water provided by GSA to alleviate this problem.

Another problem is water infiltrating the building. Water comes through the roof, from leaking pipes, and from air conditioning unit condensation. Officials from the Bureau of the Census had data showing that more than 500 leaks were reported in fiscal year 2000. They further said that water leaks often result in damage to ceilings, furniture, and equipment. GSA and Census officials said that leaks, especially condensation in the air conditioner units, can also lead to mildew contamination, which can introduce microorganisms into the air that can make sensitive individuals ill. GSA officials responsible for operating and maintaining FOB 3 have been aware of these and other needed repairs and alterations for many years. According to these officials, until fiscal year 2001, this building was not considered competitive for repair and alteration funding when compared to the critical needs of other GSA buildings.

We found a similar situation while completing our work at EEOB in Washington, D.C. According to the building's architect and engineering report, it is one of the nation's grandest and most historic buildings. Our review of the repair and alteration needs found that the building has seriously deteriorated and outdated electrical; plumbing; heating, ventilation, and air conditioning; and domestic water supply systems. A main concern of GSA staff is the potential danger associated with the condition and placement of these building systems. For example, as figure 3 illustrates, the electrical, steam, and water supply systems clutter the ceiling in the main corridor of the basement. According to GSA officials, danger exists because old electrical wiring is located near aged steam and water pipes, which burst a few times each year. In fact, GSA cited one example in which a ruptured steam pipe in a historic library caused over $150,000 in damage to ornamental metal finishes, as well as additional damage to walls and the pipe for which GSA did not have an estimate. GSA officials are particularly concerned about pipe bursts because if moisture from a broken pipe makes contact with a bare wire, a short could occur that could shut down a portion or all of the building and cause an electrical fire with noxious fumes. GSA staff said doing repairs in EEOB could be hampered because access to problem areas may be obstructed by other building systems, and identifying problem wiring might be difficult because some wiring is not documented.
Another serious concern with the electrical system was expressed by the Associate Director for the Facilities Management Division in the Executive Office of the President (EOP), who said that the current electrical system is not capable of handling 21st century office technology, which is critical to the tenant agencies' accomplishing their missions. According to the architect and engineering report, other concerns exist with the building. The sewer system, which is over 100 years old, is inefficient and outdated and frequently backs up, causing unpleasant smells and potential health concerns. Numerous instances of water infiltration and resulting damage have occurred because of leaks in the roof and the building's exterior walls. GSA provided a list of 18 rooms that have had recurrent problems with water damaging the walls. Figure 4 shows a wall, which is usually covered with a piece of painted plywood, in one of these rooms. GSA officials have given up trying to repair this wall because they have not found, and thus cannot repair, the source of the leak, and water comes in so quickly that the plaster collapses before it can harden. Figure 5 shows a deteriorating wall that resulted from water infiltrating the building. Given the historic significance of the building, its aesthetic appearance is important, and crumbling walls and peeling paint detract from this appearance. Figure 6 shows how bundles of electrical wires run outside the walls and detract from the building's appearance.

In addition to these concerns, GSA officials said that the air conditioning system, which uses about 250 individual window units, is outdated and not very efficient in cooling the building or conserving energy. Adding a modern system is a major undertaking because it would involve running wiring and ductwork throughout the building. GSA officials pointed out that it is difficult to do needed repairs and alterations at EEOB because of some rather unique circumstances. For example, relocating tenants so major repairs can be done is often difficult because many tenants need high security on their communications systems, and GSA cannot easily provide this in many locations within the building. Another problem is that some tenants operate 24 hours per day, 7 days per week, so finding a time when repairs can be done without inconveniencing the tenants is difficult. GSA's data show that in addition to the $25.2 million GSA received in fiscal year 1999 for repairs and alterations at EEOB, an estimated $216.1 million is still needed to make additional repairs and alterations, many of which have been known since at least 1984.

At the Federal Courthouse located in Muskogee, OK, we found conditions that could expose federal employees to unsafe and/or unhealthy situations. For example, the building does not have a fire sprinkler system on any of its five occupied floors. A private sector engineering study described this condition as an unacceptable risk for loss of life in the event of a fire. The study said that other fire protection improvements are also needed, including correcting a dead-end corridor and stairways, installing more smoke detectors, and replacing the outdated fire alarm system with a state-of-the-art system. According to one major tenant, the U.S. Marshals Service, the building suffers from a serious security flaw because the prisoner holding area is interconnected with the Marshals' office, which, in turn, opens into a public corridor.
This condition means that when the Marshals are transporting a defendant from the holding area to a courtroom, there is always an opportunity for confrontation between the prisoner and federal judges, court staff, and even the public. In addition, available documentation shows that all building systems are in poor condition and need to be upgraded, and the roof, which was installed in 1937, leaks. GSA regional officials have known about most of the repair and alteration needs at the Muskogee Courthouse since 1993. In fiscal year 1995, the region first began submitting a prospectus-level project to make these repairs and alterations, but it was not until 2001 that GSA headquarters supported funding for the project.

The other buildings that we visited—the Henry M. Jackson Federal Building in Seattle, WA; the A.J. Celebrezze Federal Building in Cleveland, OH; and the Earle Cabell/Santa Fe Federal Building/Courthouse in Dallas, TX—also had major repair and alteration needs, including significant water infiltration problems; outdated and inefficient heating, ventilation, and air conditioning systems; building structures that do not meet current seismic requirements; and an antiquated, inefficient, and unsafe elevator system. For example, according to GSA officials at the federal building in Seattle, WA, the elevators do not comply with seismic requirements, which could be significant given the recent major earthquake in the Seattle area. According to GSA officials, the elevators have also proven to be problematic in that they do not stop level with the floor, and one rider has tripped and been injured. Figure 7 shows an elevator not stopping level with the floor.

GSA officials pointed out that other consequences result when repairs and alterations are not done in a timely fashion. They said that FBF loses long-term revenue when limited funding prevents them from renovating vacant space in government-owned buildings that could be used instead of costly leased space to house federal agencies. In addition, the ultimate cost of completing delayed repairs and alterations may escalate because of inflation and increases in the severity of the problems caused by the delays.

GSA officials recognize that the physical condition of many federal buildings is far from ideal, that a significant inventory of repair and alteration work exists, and that some buildings cannot support 21st century operations. They pointed out that given the age of its inventory and the limited resources available to fund repairs and alterations, GSA takes pride in its ability to keep such buildings operational far beyond their normal life expectancy. We recognize that the building deficiencies discussed above are not necessarily representative of the condition of all federal buildings. In addition, GSA has recently received funding to do design repair and alteration work at five of the buildings we visited and design and construction funding for some fire safety improvements at EEOB. However, we believe, as do GSA officials, that there is ample evidence to suggest that many of the government's aging buildings are deteriorating and becoming obsolete because needed repairs and alterations are not made in a timely way. Appendix II provides specific details on the condition of the six buildings that we visited in doing our work.

GSA's multifaceted prospectus-level repair and alteration selection process identified needed projects for funding in fiscal year 2001.
GSA used defined criteria and professional judgment to rank and select projects. When GSA officials recommended projects with lower initial rankings for funding, they provided explanations for their decisions. However, insufficient funding remains a major obstacle for GSA because there are more projects than funds to pay for them. All 27 proposed projects that competed for fiscal year 2001 funding appeared adequately justified and worthy of funding. However, due to budget limitations, GSA could recommend only 12 for funding. Therefore, the other 15 projects remain in GSA's multibillion-dollar repair and alteration inventory.

As discussed earlier, GSA faces several long-standing obstacles in satisfying its repair and alteration needs. Although GSA is working to overcome some of the obstacles by improving data quality and strategic planning, GSA believes that funding limitations will likely continue to be a major roadblock in reducing the significant backlog of repair and alteration requirements. Without adequate funding, the backlog of repair and alteration needs will continue to grow, some federal buildings will continue to have health and safety concerns, and others may deteriorate to the point where federal tenants and their visitors may be subjected to worsening health and safety conditions. In addition, federal agencies may occupy space that no longer meets their operational needs and may be less efficient to operate.

Funding limitations and the backlog of repair and alteration work are not new issues. Over the last decade, GSA has struggled to satisfy its multibillion-dollar repair and alteration needs in federal buildings. The costs of repairs and alterations are typically paid from FBF, which averaged $5.3 billion in annual revenues for each of the 7 years ending with fiscal year 2001. However, most of this money is committed to leased space costs, operating costs, and construction of new federal facilities. In fact, on average only $606 million was available for making repairs and alterations over the 7-year period. If funding remains an obstacle, it will be very difficult for GSA to preserve the value of its buildings and reduce the backlog of needed repairs and alterations.

GSA recognizes that it needs to develop alternative approaches to reducing the significant backlog of repair and alteration needs and is taking actions aimed at doing so. As discussed earlier in this report, GSA program officials now give the highest funding priority to those prospectus-level repair and alteration projects that have the greatest potential to return more rent revenue to FBF. In adopting this strategy, GSA officials recognize that nonrevenue-producing projects cannot be ignored because certain buildings have serious operational and health and safety deficiencies that need immediate attention, and GSA has plans to set aside funding for these projects in future years. We concur that nonrevenue-producing projects cannot be ignored, as evidenced by the operational deficiencies and health and safety concerns documented at the buildings we visited. Furthermore, although GSA officials believe that no employee or visitor to a federal building faces imminent danger because its buildings are unsafe, evidence that we collected at the buildings visited, such as missing sprinkler systems, unacceptable levels of carbon dioxide, leaks that could cause electrical fires and release noxious fumes, and problematic elevators, suggests that significant health and safety concerns exist.
It is our view that health and safety issues may need to be more important factors in making project-funding decisions. GSA is also making an effort to reduce operating costs, which may make more funding available for needed capital investment, and it supports legislation that would give it authority to retain the revenues from real property transactions, such as the sales of assets no longer needed by the government. In addition, legislation has been proposed that would authorize GSA to enter into public-private partnership arrangements to rehabilitate and renovate federal facilities. GSA's initiatives to try to increase FBF funding and reduce the significant backlog of repairs and alterations are steps in the right direction, and efforts to aggressively pursue these and other alternative strategies should continue. Given this, we are suggesting that Congress consider giving GSA greater flexibility to explore and experiment with funding alternatives when they reflect the best economic value available for the government. Funding limitations over the years and a need to find a more effective way to manage its repair and alteration program led GSA to adopt a portfolio management approach to funding prospectus-level repair and alteration projects. Under this approach, GSA makes reinvestment decisions on the basis of the needs of the overall inventory rather than those of an individual building. GSA ranks competing repair and alteration projects using established weighted criteria, including economic return; project risk; project urgency, including health and safety issues; community planning; and customer urgency (see the illustrative sketch below). Given the evidence related to health and safety issues at the buildings visited, we recommend that GSA's Administrator reexamine the weighting of health and safety criteria to ensure that sufficient priority is being given to funding repair and alteration projects that would prevent or resolve significant health and safety problems in federal buildings. Congress should consider providing the Administrator of GSA the authority to experiment with funding alternatives, such as exploring public-private partnerships when they reflect the best economic value available for the federal government and retaining funds from real property transactions, like the sale of unneeded assets. If such authority is granted, Congress should continue its appropriation control and oversight over the use of any funds retained by GSA. On March 21, 2001, GSA's Acting Commissioner for PBS and GSA's Acting Assistant Commissioner and Acting Deputy Assistant Commissioner for Portfolio Management provided GSA's oral comments on a draft of this report. These officials generally agreed with the thrust of the report and the recommendation. They said GSA's approximately 200 million square feet of government-owned space is becoming more obsolete and in need of major repairs and alterations, and GSA is continuing its efforts to better define the repair and alteration program needs. They emphasized that GSA has made and will continue to make health and safety issues a major factor in selecting repair and alteration projects for funding. They said that GSA will reexamine the criteria used to recommend repair and alteration projects for funding in line with the report's findings and recommendation. GSA officials also provided technical comments, which have been incorporated as appropriate. On March 21, 2001, OMB's Justice/GSA budget review staff provided oral technical comments, which we incorporated where appropriate.
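To make the weighted-criteria ranking described above more concrete, the following is a minimal sketch in Python. The criterion names are taken from this report; the weights, ratings, and project names are hypothetical, chosen only for illustration, and do not reflect GSA's actual values or the Expert Choice software GSA uses.

```python
# Minimal sketch of a weighted-criteria project ranking.
# Criterion names come from the report; weights and ratings are hypothetical.

WEIGHTS = {
    "economic_return": 0.35,    # hypothetical weight
    "project_risk": 0.20,       # hypothetical weight
    "project_urgency": 0.20,    # includes health and safety issues
    "community_planning": 0.10,
    "customer_urgency": 0.15,
}

def composite_score(project):
    """Return the weighted sum of a project's 0-10 criterion ratings."""
    return sum(weight * project["ratings"][criterion]
               for criterion, weight in WEIGHTS.items())

def rank_projects(projects):
    """Order competing projects from highest to lowest composite score."""
    return sorted(projects, key=composite_score, reverse=True)

candidates = [
    {"name": "Courthouse fire and life safety upgrade",
     "ratings": {"economic_return": 3, "project_risk": 6, "project_urgency": 9,
                 "community_planning": 5, "customer_urgency": 8}},
    {"name": "Vacant-space renovation for rent revenue",
     "ratings": {"economic_return": 9, "project_risk": 4, "project_urgency": 4,
                 "community_planning": 6, "customer_urgency": 5}},
]

for project in rank_projects(candidates):
    print(f"{project['name']}: {composite_score(project):.2f}")
```

In this toy example, the revenue-producing project outscores the safety project; raising the weight on project urgency above the weight on economic return reverses the ranking, which is the kind of effect that reexamining the health and safety weighting could have.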
On March 23, 2001, a Special Assistant to the President and Director, Office of Administration in EOP, said that, on the basis of the first 60 days in office, he concurred that EEOB needs major renovations. As agreed with your offices, unless you publicly announce the contents of this report earlier, we will not distribute it until 15 days from its issue date. At that time we will send copies of the report to the Chairmen and Ranking Minority Members of committees with jurisdiction over GSA; the Honorable Mitchell E. Daniels, Jr., Director of OMB; and Thurman M. Davis, the Acting Administrator of GSA. We will make copies available to others on request. Major contributors to this report were Joshua Bartzen, James Cooksey, Bill Dowdal, Robert Rivas, and Gerald Stankosky. If you or your staff have any questions, please contact me on (202) 512-8387 or at ungarb@gao.gov. Our objectives were to (1) examine the General Services Administration’s (GSA) process for assessing and selecting prospectus-level repair and alteration design projects for funding, (2) identify any obstacles that impede GSA from satisfying its repair and alteration requirements, and (3) document consequences associated with deferring needed repairs and alterations at selected buildings. We did our work at GSA’s Public Buildings Service (PBS) headquarters located in Washington, D.C., and at 3 of GSA’s 11 regional offices. The regions that we visited were the National Capital Region located in Washington, D.C.; Greater Southwest Region located in Fort Worth, TX; and Northwest/Arctic Region located in Auburn, WA. These regions were selected for review to provide geographical dispersion. To meet our first objective, we reviewed GSA’s policy and procedures applicable to the repairs and alterations at federal buildings that are funded through the prospectus process. We obtained and completed a detailed examination of GSA’s Fiscal Year 2001 Capital Investment and Leasing Program Call, which contained the guidance that GSA staff were to follow when identifying, documenting, and selecting the repair and alteration projects that were submitted for funding consideration that year. We also reviewed and familiarized ourselves with the Program Calls pertaining to fiscal years 2000, 2002, and 2003 repair and alteration work. We discussed these Program Calls and the overall building repair and alteration program with GSA staff in both headquarters and the three regions that we visited. We then completed detailed analyses of the data related to the 27 repair and alteration design projects that GSA’s regions submitted for funding consideration in fiscal year 2001. As part of our analysis, we discussed, examined, and documented the processes, methodologies, and criteria used by the staff in the regions that we visited when they identified and prioritized the repair and alteration work that was included in the 27 projects submitted to GSA headquarters for review and funding support. Next, we examined GSA’s fiscal year 2001 budget request and determined how and why the 12 design repair and alteration projects that were included in the budget request were selected. In accomplishing this work, we discussed, examined, and documented the process, methodology, and criteria used in assessing the merits of the proposed prospectus-level projects. 
We developed a general understanding of how the computer-based decisionmaking software, Expert Choice, was used in ranking competing projects and how the criteria used in assessing the projects were developed and used. We determined whether GSA followed its prescribed process and criteria when it assessed and recommended projects for funding in fiscal year 2001 and whether GSA staff could provide explanations for recommending projects with lower initial rankings for funding. We did not independently determine whether the projects recommended for funding in fiscal year 2001 represented the best or most urgently needed repair and alteration projects in GSA's inventory. To meet our second objective, we first reviewed our prior reports dating back to 1991 to determine the extent and nature of the obstacles that GSA had previously encountered in satisfying its building repair and alteration needs. We then held discussions with headquarters and regional staff about the obstacles that have impeded, and are still impeding, the completion of identified repairs and alterations and GSA's efforts to overcome these obstacles. We reviewed GSA's budget submissions and appropriations acts, as well as the Federal Buildings Fund (FBF), as it relates to the financing of repair and alteration work. We determined the total revenue generated by the FBF in each of the past 7 years and the amounts of funding that were available to finance repairs and alterations. We also determined the total amounts of funding requested by GSA to finance building repairs and alterations in fiscal years 1995 through 2001 and then compared the amounts requested with the amounts of new obligational authority approved by Congress. To meet our third objective, we reviewed our previous reports, as well as a 1998 report prepared by the National Research Council, which document the physical condition of federal facilities and discuss the known and possible consequences associated with delaying or not doing needed repairs and alterations. We held discussions with GSA officials in headquarters and the three regions that we visited about the condition of the overall federal building portfolio. We then visited, observed, and documented the physical condition of six federal buildings located throughout the country. These buildings included the Eisenhower Executive Office Building (EEOB) located in Washington, D.C.; Federal Office Building 3 (FOB 3) located in Suitland, MD; the Celebrezze Federal Building located in Cleveland, OH; the Earle Cabell/Santa Fe Federal Building/Courthouse located in Dallas, TX; the U.S. Courthouse located in Muskogee, OK; and the Jackson Federal Building located in Seattle, WA. After consulting with congressional staff, we selected these buildings for detailed review because they varied in size and use, provided geographic dispersion, and had recently received prospectus-level funding to finance repair and alteration projects. Specifically, GSA requested and received repair and alteration design funding for the Earle Cabell/Santa Fe Federal Building/Courthouse and the Jackson Federal Building in fiscal year 2000, and for the Celebrezze Federal Building, FOB 3, and the Muskogee Courthouse in fiscal year 2001. GSA also received fiscal year 1999 funding primarily for a prospectus-level fire safety improvement project in EEOB. However, GSA did not receive design funding for EEOB in fiscal year 2001.
We met with PBS officials who operate and maintain these buildings to discuss the condition of the buildings and the consequences associated with not doing needed repairs and alterations. We also met with major tenants at each of the six buildings to discuss what impact, if any, GSA's failure to complete building repairs and alterations had on the agencies' operations. We reviewed various reports, including building engineering reports, prospectus development studies, and other documents, that describe the condition of the buildings and the repairs and alterations that need to be made. Lastly, we obtained and analyzed information on the repairs and alterations that had been completed at each of the buildings during fiscal years 1995 through 2000 and on those repairs and alterations that still need to be completed. We did not do a complete reliability assessment of GSA's repair and alteration data used in our review. However, we did limited testing of the data and adjusted the data used in our analysis when we found any discrepancies. We did not independently validate GSA's cost estimates for needed repair and alteration work. The results of our work at the six selected buildings cannot be projected to any other building(s) in GSA's inventory. We did our work between July 2000 and February 2001 in accordance with generally accepted government auditing standards. On March 5, 2001, we requested comments on a draft of this report from the Acting Administrator of GSA, the Director of OMB, and a Special Assistant to the President and Director, Office of Administration in the Executive Office of the President. On March 21, 2001, we received oral and technical comments on a draft of this report from GSA's PBS management staff. On March 21, 2001, we received oral technical comments from OMB's Justice/GSA budget review staff. On March 23, 2001, we received comments from the Special Assistant to the President and Director, Office of Administration. The comments are discussed near the end of the letter. The following information should be considered when reading each building profile:
- The estimated cost of repairs cited is not expressed in constant-year dollars because GSA did not always have data that would allow us to do this. The dollar value represents the best estimate GSA had for unfunded repairs and alterations at the time we did our work. We did not independently validate GSA's estimates.
- The date when repair and alteration needs were identified represents the earliest date we were able to document using available GSA records.
- GSA's policy related to hazardous materials is to correct any situation that is an immediate danger to tenants (such as when the materials have been disturbed and released into the air). If the materials present no immediate danger, they are left alone. When these materials could be disturbed—for example, if repair work is done in an area where they are located—GSA undertakes abatement procedures to preclude exposing repairmen and building occupants to these materials and to prevent releasing the materials into the environment.
- GSA's policy related to fire, accessibility, and life safety codes is to construct all buildings in line with existing standards and bring old buildings up to current standards when it would be a logical extension of other needed work. For example, adding a sprinkler system may be reasonable when GSA is doing extensive plumbing renovation work in a building.
- Information on current building conditions and consequences of delay is based on documentation in GSA files, discussions with knowledgeable GSA and tenant staff, and our observations during building visits.

Federal Office Building 3 (FOB 3)
Location: Suitland, MD.
Historic status: Eligible for, but not currently on, the National Register of Historic Places.
Opened: 1942.
Size: 731,000 gross square feet in 3 floors and a partial basement.
Major tenant(s): Bureau of the Census.
Number of federal employees: About 3,200.
Architecture: An uncomplicated brick building that exemplifies “stripped classicism.”
Estimated cost of needed repairs: $132.9 million, in addition to about $5.1 million received for design in fiscal year 2001.
Date when needs were documented: 1990.

Current conditions and consequences of delay include the following:
- The air in the building has levels of carbon dioxide that exceed industry standards.
- Office air conditioning units leaked or developed condensation over 200 times in fiscal year 2000. This situation facilitates the growth of molds and mildews that can cause sensitive individuals to get sick if these substances are released into the air. According to Census officials, a few employees were granted workers' compensation for absences caused by building-related problems.
- Building temperature cannot be controlled evenly, with some areas having uncomfortable temperatures. Energy and maintenance costs are higher.
- Appropriate repairs are not always possible because some repair parts are no longer manufactured. Such repairs adversely affect system efficiency and lead to lower tenant satisfaction.
- Energy costs are higher because of system inefficiency, and increased maintenance costs result from an increased number of breakdowns and power outages. Both systems will have difficulty accommodating 21st century technology.
- The building's water contains metal contaminants, and GSA incurs the incremental cost of providing bottled drinking water.
- Over 300 reported water leaks in fiscal year 2000 from the roof and other water sources caused damage to floors, ceilings, furniture, and equipment. Census officials said that since 1995 there have been 37 instances in which employees slipped on water from leaks and were injured.
- Pipe breaks and leaks create circumstances that facilitate the growth of mold and mildew. If released into the air, these substances can make sensitive individuals sick. Deterioration leads to more frequent repairs and higher maintenance costs and to lower tenant satisfaction.
- A health risk exists if asbestos or lead is disturbed and released into the air. This condition leads to lower tenant satisfaction and potential legal liability. A health and safety risk exists.

Eisenhower Executive Office Building (EEOB)
Location: Washington, D.C.
Historic status: On the National Register of Historic Places.
Opened: 1888.
Size: Over 670,000 gross square feet in 6 floors and a basement.
Major tenant(s): The Executive Office of the President and support agencies.
Number of federal employees: About 1,200.
Architecture: One of the nation's finest examples of the French Second Empire style of architecture.
Estimated cost of needed repairs: $216.1 million, in addition to about $25.2 million received for design and construction in fiscal year 1999.
Date when needs were documented: 1984.

Current conditions and consequences of delay include the following:
- The outdated electrical system could fail at any time or short out if a water or steam pipe bursts and water comes in contact with bare wire, which could shut down building and tenant operations. Electrical fires can create noxious fumes.
- The existing system will have difficulty accommodating 21st century telecommunications and other technology.
- Maintenance costs are higher because of more minor breakdowns in an aged system and difficulties related to accessing problem areas and diagnosing which wires are part of the problem. Energy costs are higher because of system inefficiency.
- Sewers have backed up, causing unpleasant smells and creating potential health concerns.
- A potential for electrical fires exists because storm drain system problems permit flooding in areas containing high-voltage equipment. Storm and sanitary systems are combined and do not meet environmental and health code requirements. A safety hazard exists if maintenance staff have to do electrical work in flooded areas.
- Rain leaders, which are pipes that drain water from the roof inside the building's outer walls, have leaks that ultimately damage interior surfaces and could cause an electrical fire if the water comes in contact with aged, bare wire. Electrical fires can create noxious fumes. Water in continuous contact with interior structural supports can significantly damage metal, stone, and concrete, thus weakening the building's structural integrity. These conditions can lower tenant satisfaction.
- The outdated and inefficient system could fail and not pump water. The outdated design of the water tanks has the potential to introduce lead contaminants into the water. The water holding tanks are rusting, which can result in holes that lead to flooded areas in the building; a hole has developed once. Energy costs to operate the system are higher.
- The age of the heating and ventilation systems presents the potential for them to fail at any time. Steam pipes burst several times per year and cause damage. In one case, a pipe burst in a historic library and did over $150,000 in damage to ornamental metal finishes, as well as other damage for which GSA did not have an estimate.
- Maintenance costs are higher because minor breakdowns of aged heating and ventilation systems occur more frequently, and repairs involve overcoming difficulties that result when access to the problem area is obstructed. Radiators and window air conditioning units break down, which can result in uncomfortable temperatures. Energy costs are higher because of system inefficiency, and maintenance costs are higher because the 250 window air conditioning units break down often because of their age.
- Steam leaks and condensation from window air conditioning units facilitate the growth of mold and mildew that can cause sickness in sensitive individuals if the substances are released into the air.
- Water leaks deteriorate the building structure. Damage to interior surfaces, some of which requires costly historic restoration, increases maintenance costs and detracts from the historic beauty of the building. A potential safety hazard exists if water comes in contact with a bare wire behind the walls and causes an electrical fire, which can create noxious fumes. A potential health hazard exists because water leaks facilitate the growth of molds and mildews that may cause sickness in sensitive individuals if these substances are released into the air. Maintenance costs are higher because recurrent cosmetic repairs are needed to correct the damage when the cause of the damage, a leak, is not repaired.

U.S. Courthouse, Muskogee
Location: Muskogee, OK.
Historic status: Eligible for, but not currently on, the National Register of Historic Places.
Opened: 1915 (expanded in 1937).
Size: 124,000 gross square feet in 5 floors and a basement.
Major tenant(s): The U.S. 10th District Courts and the U.S. Marshals Service.
Number of federal employees: About 250.
Architecture: An excellent example of Neoclassic Revival/Second Renaissance Revival.
Estimated cost of needed repairs: $13.6 million, in addition to about $800,000 received for design in fiscal year 2001.
Date when needs were documented: 1993.

Current conditions and consequences of delay include the following:
- A study by a private sector engineering firm described the building's fire safety conditions as an unacceptable safety hazard because of the potential for the loss of life and property during a fire. Maintenance costs are higher because replacement parts for the existing fire alarm system are hard to find.
- Not having a secure corridor that separates prisoners from judges, courthouse staff, or the public is a safety risk because it increases the possibility of a confrontation or an attempted jailbreak.
- Energy/utility costs are estimated to be 15 percent higher.
- The current plumbing system has extensive backup and leak problems that result in water damage to ceilings and walls.
- The restrooms do not meet Uniform Federal Accessibility Standards.

A.J. Celebrezze Federal Building
Location: Cleveland, OH.
Historic status: Not historic.
Opened: 1966.
Size: About 1.5 million gross square feet in 33 floors and a partial mezzanine level above ground level, and a cafeteria level, a basement, and a subbasement below ground level.
Major tenant(s): The Defense Finance and Accounting Service, Internal Revenue Service, and Department of Veterans Affairs.
Number of federal employees: Over 3,500.
Architecture: One of the better examples of architecture characteristic of the “Great Society Buildings.”
Estimated cost of needed repairs: $128.1 million, in addition to about $1.5 million received for design in fiscal year 2001.
Date when needs were documented: 1995.

Current conditions and consequences of delay include the following:
- Energy and maintenance costs are higher.
- Water leaking from air conditioning units in offices rusts the building's metal inner skin, which holds the building's exterior panels. Temperature control is limited, and air temperature is uneven throughout the building. A potential health hazard exists because water leaks and condensation in the office units facilitate the growth of molds and mildews that may cause sickness in sensitive individuals if the substances are released into the air.
- A safety hazard would occur if a building panel falls, as happened in 1993. Although GSA has taken steps to better secure the exterior panels, the problem will continue until the water infiltration problem is corrected and the hardware and structure no longer rust.
- A leak caused an electrical fire that shut down a portion of the electrical system and required repairs that cost $80,000.
- Water infiltration causes structural deterioration. Maintenance costs are higher because recurrent cosmetic repairs are needed to correct the damage while the cause of the damage, leaks, is not repaired. Stored materials, such as tenant agency supplies, have been damaged.
- The electrical system could fail because of its age and associated deterioration. Maintenance costs are higher because of the increased incidence of minor problems. A safety danger could result if the system becomes overloaded. The existing system could have difficulty accommodating 21st century technology. Energy costs are higher.
- A safety hazard exists if asbestos is disturbed and released into the air.

Henry M. Jackson Federal Building
Location: Seattle, WA.
Historic status: Not historic.
Opened: 1974.
Size: About 820,000 gross square feet in 36 floors and a basement.
Major tenant(s): The Internal Revenue Service, Coast Guard, Department of Education, and Department of Veterans Affairs.
Number of federal employees: About 2,400.
Architecture: Skyscraper.
Estimated cost of needed repairs: $45.5 million, in addition to about $1.7 million received for design in fiscal year 2000.
Date when needs were documented: 1995.

Current conditions and consequences of delay include the following:
- The building could incur significant damage and threaten the life and safety of building occupants during an earthquake. The elevators could experience significant shaking or a free fall during an earthquake.
- The elevator system is a safety hazard; one rider tripped and was injured when an elevator did not stop level with the floor. Operating costs are higher. This building condition lowers tenant satisfaction.
- The floors and walls in the basement and parking garage have been damaged.
- Maintaining the current window shades is expensive and difficult, and the shades result in higher heating and cooling costs. This building condition lowers tenant satisfaction.

Earle Cabell/Santa Fe Federal Building/Courthouse
Location: Dallas, TX.
Historic status: Cabell is not historic; Santa Fe is on the National Register of Historic Places.
Opened: 1971 and 1925, respectively.
Size: A combined total of about 1.4 million gross square feet in 16 floors, a basement, and a subbasement; and 19 floors, a basement, and an attic, respectively.
Major tenant(s): The Departments of Justice, Agriculture, and the Treasury; Internal Revenue Service; U.S. Navy; and U.S. Federal Courts.
Number of federal employees: About 3,000 combined.
Architecture: Skyscrapers.
Estimated cost of needed repairs: $24.2 million, in addition to about $1.4 million received for design in fiscal year 2000.
Date when needs were documented: 1994.

Current conditions and consequences of delay include the following:
- A potential life/safety problem exists because corroded sprinkler heads may not work, increasing danger to life and property during a fire. A potential life/safety problem also exists because the ineffective placement of some sprinkler heads decreases their usefulness. The existing system could also leak and damage property.
- A potential life/safety issue exists because some panels are loose or have shifted from their original position and could fall from the building. Grime and exhaust have coated the building and detract from its appearance.
- The temperature throughout the building is inconsistent, and upgrading some parts of the current system is not cost effective. Energy use and costs are higher. The current system shortens the life of the HVAC equipment because the system has to run much more to reach the desired temperature. Utility/energy costs are substantially higher, and the decreased comfort results in lower tenant satisfaction.
The General Services Administration (GSA), the federal government's real property manager, is responsible for identifying, funding, and completing needed repairs and alterations at federal buildings. This report examines (1) GSA's process for assessing and selecting prospectus-level major repair and alteration design projects for funding, (2) the obstacles that impede GSA from satisfying its repair and alteration requirements, and (3) the consequences associated with deferring needed repairs and alterations at selected buildings. GAO found that in fiscal year 2001, GSA assessed the merits of 27 prospectus-level repair and alteration design projects and recommended 12 for funding. These projects were selected through a multifaceted process that relied on empirical data and professional judgment coupled with specific selection criteria and computer analysis that compared competing projects. GSA explained its decisions when it recommended lower-ranked projects for funding. However, because of insufficient funding, the 15 projects that were not recommended remain in GSA's growing repair and alteration inventory. GSA has faced long-standing obstacles, including inadequate program data, the lack of a multiyear repair and alteration plan, and limited funding, in reducing this multibillion-dollar inventory, and funding limitations in particular remain a major obstacle. Delaying or not performing needed repairs and alterations may have serious consequences, including health and safety problems.
The GAT Board, the Recovery Board, and OMB have initiatives under way to improve the accuracy and availability of federal spending data. The GAT Board, with a mandate of providing strategic direction, has four working groups charged with developing approaches for improving the quality of data in federal contract, grants, and financial management systems and for expanding the availability of these data to improve oversight of federal funds. The working groups represent the federal procurement, grants, financial management, and oversight communities and include interagency forums such as the Chief Acquisition Officers Council and the Council of the Inspectors General on Integrity and Efficiency. (See appendix I for more information on the GAT Board working groups.) For example, the GAT Board established the Procurement Data Standardization and Integrity Working Group to develop approaches that ensure that contracting data are accurate and that contract transactions can be tracked from purchase order through vendor payments. The GAT Board selected DOD to lead this effort in order to leverage DOD's long-standing efforts to increase the accuracy of contract data submitted to the Federal Procurement Data System-Next Generation (FPDS-NG). Through these working groups, the GAT Board has begun to develop approaches to (1) standardize data elements across systems; (2) link financial management systems with award systems so that spending data can be reconciled with obligations; and (3) use the data to help identify and reduce fraud, waste, and abuse. However, the GAT Board's mandate does not provide it with the authority to implement these reforms; therefore, it must rely on its working groups' lead agencies to implement approaches that it has approved. Moreover, the GAT Board has no dedicated funding, so its strategic plan is short-term and calls for an incremental approach that builds upon ongoing agency initiatives. We found that standardizing data and having a uniform convention for identifying contract and grant awards throughout their life cycle are the first steps in ensuring data quality and tracking spending data. Without this uniformity, reporting and tracking spending data are inefficient and burdensome. Current efforts are focused on identifying approaches to standardize contract and grant award data elements to improve data accuracy, and to date some progress has been made. For example:
- Based in part on work of the GAT Board for the Federal Acquisition Regulatory Council, DOD proposed a regulation requiring federal agencies to use a uniform procurement identifier—a number that could be attached to a contract so it can be tracked across various systems throughout the procurement process.
- OMB, working with the GAT Board, issued new guidance that requires all federal agencies to establish unique identification numbers for financial assistance awards. While this guidance could help bring greater consistency to grant award data, it only requires agencies to assign award numbers that are unique within their agency and thus does not provide the same level of uniformity as is required for contracts. OMB has noted that standardizing an identifier format could cause problems for agency systems because some agencies structure their award identifiers to track particular characteristics of grants for their internal use.
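To illustrate what a uniform award identifier provides, the following is a minimal sketch assuming a hypothetical identifier format. The agency code, fiscal year, type, and serial fields below are our own invention for illustration; they are not the format in the proposed DOD regulation or OMB's guidance.

```python
import re

# Hypothetical uniform award identifier format: AGENCY-FY-TYPE-SERIAL,
# e.g. "GSA-2013-C-000042". This format is invented for illustration only.
AWARD_ID = re.compile(
    r"^(?P<agency>[A-Z]{2,5})-(?P<fiscal_year>\d{4})-(?P<kind>[CG])-(?P<serial>\d{6})$"
)

def parse_award_id(award_id):
    """Validate an identifier and decompose it into its named fields."""
    match = AWARD_ID.match(award_id)
    if match is None:
        raise ValueError(f"not a valid uniform award identifier: {award_id!r}")
    fields = match.groupdict()
    fields["kind"] = "contract" if fields["kind"] == "C" else "grant"
    return fields

# Because every system would carry the same identifier, an award record and a
# payment record could be joined on it directly, rather than through
# agency-specific keys that differ from system to system.
print(parse_award_id("GSA-2013-C-000042"))
```

The design point the sketch makes is the one in the report: a single identifier that follows an award from obligation through payment lets records be matched across systems without agency-by-agency translation.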
Through its work with the GAT Board, HHS examined more than 1,100 individual data elements used by different agencies and found wide variation in terminology and associated definitions that affected how spending was captured, tracked, and reported. The Recovery Board recently concluded its Grant Reporting Information Project, which tested the feasibility of using the website FederalReporting.gov to collect data on non-Recovery Act grant expenditures. The Recovery Board's analysis of the project supported using FederalReporting.gov for grant reporting and validated the effectiveness of using a universal award identifier. The GAT Board is also building on Treasury's effort to integrate financial management systems to better track spending. Its Financial Management Working Group is developing recommendations for a work plan that will seek to leverage Treasury's ongoing transparency and system modernization efforts. For example, the board is building on Treasury's initiative to standardize payment transaction processes, which will consolidate more than 30 agency payment systems into a single application. This application will process agency payment requests using a standardized payment request format, which all agencies that use Treasury disbursing services will be directed to use by October 1, 2014. The GAT Board also intends to leverage Treasury's plans to develop a centralized repository with detailed and summarized records of payment transactions from all federal agencies, including payments reported by the federal agencies that disburse their own payments. The Payment Information Repository will contain descriptive data on payments that can be matched with other data to provide additional information regarding the purpose, program, location, and commercial recipient of the payment. A third area on which federal transparency efforts have focused is using existing data to enhance spending oversight. Data mining applications are emerging as essential tools to inform management decisions, develop government-wide best practices and common solutions, and effectively detect and combat fraud in federal programs. For example, predictive analytic technologies can identify fraud and errors before payments are made, while data-mining and data-matching techniques can identify fraud or improper payments that have already been awarded. The Recovery Board's Recovery Operations Center (ROC) uses data analytics to monitor Recovery Act spending and has provided several inspectors general with access to these tools. ROC staff were able to notify agencies that they had awarded Recovery funds to companies that were debarred. ROC analysts also found hidden assets that resulted in a court ordering the payment of a fine and identified several individuals who were employed by other entities while receiving workers' compensation benefits. The GAT Board's Data Analytics Working Group has set a goal of expanding on the ROC's work to develop a shared platform for improving fraud detection in federal spending programs. This approach relies on the development of data standards and will provide a set of analytic tools for fraud detection to be shared across the federal government. Although this work is just starting, working group members have identified several challenges, including reaching consensus among federal agencies on a set of common data attributes to be used and obtaining changes needed to existing privacy laws to allow access to certain types of protected data and systems.
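As an illustration of the data matching described above, the following minimal sketch flags awards made to debarred entities. The entity identifiers and award records are hypothetical, and this is not the ROC's actual tooling; real matching must also handle name variants, corporate parents and subsidiaries, and stale exclusion records.

```python
# Sketch of exclusions-list matching: flag any award whose recipient appears
# on a debarment list. All identifiers and records below are hypothetical.

EXCLUDED_ENTITY_IDS = {"123456789", "987654321"}  # hypothetical debarred entities

awards = [
    {"award_id": "GSA-2013-C-000042", "recipient_id": "123456789", "amount": 250000},
    {"award_id": "HHS-2013-G-000107", "recipient_id": "555000111", "amount": 90000},
]

# A simple set-membership test on a shared recipient identifier; this is the
# step that depends on the data standards discussed above.
flagged = [award for award in awards
           if award["recipient_id"] in EXCLUDED_ENTITY_IDS]

for award in flagged:
    print(f"award {award['award_id']} went to excluded entity "
          f"{award['recipient_id']} (${award['amount']:,})")
```

The sketch also shows why common data attributes matter: the match works only if the award system and the exclusions list key entities the same way.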
A forum we co-hosted in January 2013, along with the Council of the Inspectors General on Integrity and Efficiency and the Recovery Board, explored these challenges and identified next steps to address them. Forum participants identified a range of challenges, including the lack of data standards and of a universal award identifier, which limit data sharing across the federal government and across federal, state, and local agencies. Working groups or other structures have been formed to move these issues forward. For example, we are leading a community of practice for federal, state, and local government officials to discuss challenges and opportunities related to data sharing within and across government agencies. In many cases, the transparency initiatives of the GAT and Recovery Boards, OMB, and key federal agencies build on lessons learned from the operation of existing transparency systems. But as new transparency initiatives get under way, we believe there are opportunities to give additional consideration to these lessons to help ensure new transparency programs and policies are implemented successfully. First, we found that in implementing the Recovery Act, OMB directed recipients of covered funds to use a series of standardized data elements and to report centrally into the Recovery Board's reporting website. The transparency envisioned under the Recovery Act required the development of a system that could quickly trace billions of dollars disbursed to thousands of recipients across a variety of programs. Agencies had systems in place that captured such information as award amounts, funds disbursed, and, to varying degrees, progress being made by recipients. However, the lack of uniform federal data and reporting standards made it difficult to obtain these data from federal agencies. Because agencies did not collect spending data in a consistent manner, the most expedient approach for Recovery Act reporting was to collect data from fund recipients, which placed additional burden on them to provide these data. Federal fund recipients we spoke to said that the lack of consistent data standards and of commonality in how data elements are defined and reported places undue burden on them because it can result in having to report the same information multiple times and requires recipients to enter data manually, which can affect the accuracy of the data. For example, a nonprofit group representative who participated in one of our focus groups said that the group had to report the same information through 15 different reporting platforms, so having data standards and a single reporting platform would make the reporting process more efficient. Given the longer time frames to develop current transparency initiatives, OMB and the GAT Board are working toward greater data consistency by focusing on data standards. Citing agency budgetary constraints and the potential of emerging technologies for extracting nonstandard data elements from disparate systems, the GAT Board and OMB are taking incremental steps toward increasing data standardization. Their plans, however, do not include long-term steps, such as working toward uniform award identifiers, that would improve award tracking with less burden on recipients. Second, we found that early in the development of both the Recovery Act reporting system and its procedures, federal officials listened to the concerns of recipients and made changes to guidance in response, which helped ensure that recipients could meet those requirements.
Given the daunting task of rapidly establishing a system to track billions of dollars in Recovery Act funding, OMB and the Recovery Board implemented an iterative process that allowed many stakeholders to provide insight into the challenges that could impede their ability to report Recovery Act expenditures. Federal fund recipients we spoke with stressed the importance of having a formal mechanism to provide feedback to the federal government as guidance is crafted and before new transparency reporting requirements are established, to ensure that the guidance is clear and understandable. Such guidance helps ensure that the data recipients report are accurate and timely and that reporting is minimally burdensome. Although the GAT Board has implemented a structure that leverages the expertise of federal officials with in-depth knowledge of federal procurement, grant-making, and financial management operations, the board does not have any formal mechanisms, other than the federal rule-making process, to obtain input from non-federal fund recipients. As we learned through our work examining Recovery Act implementation, without similar outreach under the current initiatives, reporting challenges may not be addressed, potentially impairing the data's accuracy and completeness and increasing burden on those doing the reporting. Third, we found that under the Recovery Act, specific requirements and responsibilities for transparency were clearly laid out in statute, which provided unprecedented transparency and helped to ensure that the act's transparency requirements were implemented within tight time frames. The Recovery Act specified the timing of reporting, including its frequency and deadlines, and the items that needed to be included in the reporting. The act also required the Recovery Board to conduct and coordinate oversight of the funds and to deploy a data-collection system and a public-facing website to provide spending data to the public. Unlike the GAT Board, the Recovery Board had funding, which was used to provide staff and resources for developing and operating its data collection system, website, and oversight activities. In contrast, authority for implementing the current transparency initiatives is not as clearly defined, and authority for expanding transparency is centered in an executive order rather than legislation. An official from an association representing federal fund recipients told us that clear reporting guidance was essential for ensuring compliance with reporting requirements, especially for recipients with limited resources. Moreover, unlike under the Recovery Act, new transparency initiatives are being funded through existing agency resources using agency personnel, as separate funding is unavailable. As we have previously reported, given the importance of leadership to any collaborative effort, transitions and inconsistent leadership, which can occur as administrations change, can weaken the effectiveness of collaborative efforts and result in a lack of continuity. We found that the GAT Board's vision for comprehensive transparency reform will take several years to implement; therefore, continuity of leadership becomes particularly important. Going forward, without clear, legislated authority and requirements, the ability to sustain progress and institutionalize transparency initiatives may be jeopardized as priorities shift over time.
In our recently released report, we recommended that OMB and the GAT Board develop a long-term strategy for implementing data standards across the federal government and for obtaining input from federal fund recipients. Specifically, we recommended that the Director of OMB, in collaboration with the members of the GAT Board, take the following two actions:
- Develop a plan to implement comprehensive transparency reform, including a long-term timeline and requirements for data standards, such as establishing a uniform award identification system across the federal government.
- Increase efforts for obtaining input from stakeholders, including entities receiving federal funds, to address reporting challenges and strike an appropriate balance that ensures the accuracy of the data without unduly increasing the burden on those doing the reporting.
The GAT Board, OMB, and other cognizant agencies generally agreed with our recommendations and identified actions under way or planned that they believe will operationalize comprehensive transparency reforms and help them obtain stakeholder input. Our recently issued report also suggested that Congress consider legislating transparency requirements and establishing clear lines of authority to ensure that recommended approaches for improving spending data transparency are implemented across the federal government. Among other things, this would help ensure effective decision making and the efficient use of resources dedicated to enhancing the transparency of federal spending data. Chairman Warner, Ranking Member Ayotte, and Members of the Task Force, this completes my prepared statement. I would be pleased to respond to any questions that you may have at this time. If you or your staff have any questions about this testimony, please contact me at (202) 512-6806 or czerwinskis@gao.gov. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this statement. GAO staff who made key contributions to this testimony are Carol L. Patey, Assistant Director, and Kathleen M. Drennan, Ph.D., Analyst-in-Charge. Additional contributions to our detailed report were made by Gerard S. Burke, Patricia Norris, Cynthia M. Saunders, Ph.D., Robert Robinson, Jessica Nierenberg, Judith Kordahl, and Keith O'Brien. The Government Accountability and Transparency Board (GAT Board) is composed of the following 11 members, designated by the President from among agency inspectors general, agency chief financial officers or deputy secretaries, and senior officials from OMB. The President designates a chairman from among the members.
- Director, Defense Procurement and Acquisition Policy, U.S. Department of Defense
- Assistant Secretary, Department of the Treasury
- Deputy Secretary, U.S. Department of Veterans Affairs
- Assistant Secretary for Financial Resources and Chief Financial Officer, U.S. Department of Health and Human Services
- Inspector General, U.S. Postal Service
- Inspector General, U.S. Department of Energy
- Inspector General, National Science Foundation
- Inspector General, U.S. Department of Health and Human Services
- Deputy Controller, Office of Management and Budget
- Inspector General, U.S. Department of Transportation
- Inspector General, U.S. Department of Education
The GAT Board established four working groups, as shown in table 1.
The federal government spends more than $3.7 trillion annually, with more than $1 trillion awarded through contracts, grants, and loans. Improving transparency of this spending is essential to improve accountability. Recent federal laws have required increased public information on federal awards and spending. This testimony is based on GAO's recently issued report GAO-13-758. It addresses (1) the status of transparency efforts under way and (2) the extent to which new initiatives address lessons learned from the Recovery Act. GAO reviewed relevant legislation, executive orders, OMB circulars and guidance, and previous GAO work, including work on Recovery Act reporting. GAO also interviewed officials from OMB, the GAT Board, and other federal entities; government reform advocates; associations representing fund recipients; and a variety of contract and grant recipients. Several federal entities, including the Government Accountability and Transparency Board (GAT Board), the Recovery Accountability and Transparency Board (Recovery Board), and the Office of Management and Budget (OMB), have initiatives under way to improve the accuracy and availability of federal spending data. The GAT Board, through its working groups, developed approaches to standardize key data elements to improve data integrity; link financial management systems with award systems to reconcile spending data with obligations; and leverage existing data to help identify and reduce fraud, waste, and abuse. With no dedicated funding, GAT Board plans are incremental and leverage ongoing agency initiatives and resources designed to improve existing business processes as well as improve data transparency. These initiatives are in an early stage, and some progress has been made to bring greater consistency to contract and grant award identifiers. The GAT Board's mandate is to provide strategic direction, not to implement changes. Further, while these early plans are being developed with input from a range of federal stakeholders, the GAT Board and OMB have not developed mechanisms for obtaining input from non-federal fund recipients. Lessons from implementing the transparency objectives of the Recovery Act could help inform these new initiatives:

Standardize data to integrate systems and enhance accountability. Similar to the GAT Board's current focus on standardization, the Recovery Board recognized that standardized data would be more usable by the public and the Recovery Board for identifying potential misuse of federal funds. However, reporting requirements under the Recovery Act had to be met quickly. Because agencies did not collect spending data in a consistent manner, the most expedient approach was to collect data from fund recipients, even though similar data already existed in agency systems. Given the longer time frames to develop current transparency initiatives, OMB and the GAT Board are working toward greater data consistency by focusing on data standards. Their plans, however, do not include long-term steps, such as working toward uniform award identifiers, that would improve award tracking with less burden on recipients.

Obtain stakeholder involvement as reporting requirements are developed. During the Recovery Act, federal officials listened to the concerns of recipients and made changes to guidance in response, which helped ensure they could meet those requirements.
Without similar outreach under the current initiatives, reporting challenges may not be addressed, potentially impairing the data's accuracy and completeness and increasing burden on those reporting.

Delineate clear requirements and lines of authority for implementing transparency initiatives. Unlike the present efforts to expand spending transparency, the Recovery Act provided OMB and the Recovery Board with clear authority and mandated reporting requirements. Given this clarity, transparency provisions were carried out successfully and on time. Going forward, without clear, legislated authority and requirements, the ability to sustain progress and institutionalize transparency initiatives may be jeopardized as priorities shift over time. In its report, GAO recommended that the Director of OMB, with the GAT Board, develop a long-term plan to implement comprehensive transparency reform and increase efforts for obtaining stakeholder input to ensure that reporting challenges are addressed. Further, Congress should consider legislating transparency requirements and establishing clear authority to implement these requirements to ensure that recommended approaches for improving transparency are carried out across the federal government. The GAT Board, OMB, and other cognizant agencies generally concurred with GAO's recommendations.
The President's Vision for Space Exploration, announced in 2004, calls for the retirement of the shuttle upon completion of the ISS and the creation of new vehicles for human space flight that will allow a return to the moon by 2020 and voyages to Mars and points beyond. The shuttle manifest currently consists of 16 flights—15 to complete assembly and integration of the ISS and a servicing mission to the Hubble Space Telescope. The first new space vehicles are currently targeted to begin operating no later than 2014, thereby creating a potential gap in U.S. human space flight. Congress has voiced concern over the United States not having continuous access to space, and NASA has made it a priority to minimize the gap to the extent possible. NASA has begun planning for the retirement of the shuttle, scheduled for 2010, by identifying best practices in closing facilities and transitioning capabilities. Specifically, NASA has conducted a number of benchmarking studies of previous closures and realignments of large programs, including the Titan IV rocket fly-out, the close of F/A-18 C/D fighter production, and the Navy's Base Realignment and Closure activities. The benchmarking efforts have highlighted to NASA the importance of having a plan, effective communication, human capital management, and effective program management tools. NASA's benchmarking effort also showed that closing and transitioning facilities, equipment, and people is expensive and time consuming. Among the lessons learned is that, historically, it has taken 3.5 years to close down an installation and another 3 years to complete the transition of the property. NASA's Office of the Inspector General has recently reviewed NASA's plan for the space shuttle transition and recommended, among other improvements, that the two affected space directorates finalize and implement the Human Space Flight Transition Plan. Development of the Orion crew capsule, Ares I launch vehicle, and other exploration systems needed to implement the Vision is dependent on a "go as you can afford to pay" approach, wherein lower-priority efforts will be deferred, descoped, or discontinued to allow NASA to stay within its available budget profile. In recent testimony, the NASA Administrator said that the cost associated with returning the shuttle to flight, continued shuttle operations, and recent budget reductions had the combined effect of increasing the gap by delaying the first manned Orion test flight by 6 months. In an effort to address the gap in U.S. capability to resupply the space station following retirement of the shuttle, NASA is investing in commercial space transportation services. NASA's expectation is that by acquiring domestic orbital transportation services it will be able to send cargo and, in the future, transport crews to the ISS in a cost-effective manner. NASA refers to this as the Commercial Orbital Transportation Services project. The project is in the early stages of development. Should these commercial services prove to be unreliable or more costly than anticipated, NASA will need to purchase space transportation from its international partners to meet obligations to the ISS until the new Orion spacecraft becomes operational. We have undertaken a substantial body of work over the past 3 years that has highlighted the significant challenges that NASA will face as it retires the shuttle and transitions to exploration activities.
One key challenge is sustaining the shuttle workforce through the retirement of the shuttle while ensuring that a viable workforce is available to support future activities. Another key challenge will be developing the Orion Crew Exploration Vehicle within cost, schedule, and performance goals. Additionally, our ongoing work has identified a number of other areas that may present challenges during the transition period. Some of these challenges include managing the supplier base to ensure its continued viability, developing the Ares I Crew Launch Vehicle, and completing and supporting the space station. The Space Shuttle Program's workforce is critical to the success of the Vision. The shuttle workforce currently consists of approximately 2,000 civil service and 15,000 contractor personnel, including a large number of engineers and scientists. In 2005, we reported that NASA had made limited progress toward developing a detailed strategy for sustaining a critically skilled shuttle workforce to support space shuttle operations. We reported that significant delays in implementing a strategy to sustain the shuttle workforce would likely lead to larger problems, such as funding problems and failure to meet NASA program schedules. Accordingly, we concluded that timely action to address workforce issues is critical given their potential impact on NASA-wide goals such as closing the gap in human spaceflight. When we performed our work, several factors hampered the ability of the Space Shuttle Program to develop a detailed long-term strategy for sustaining the critically skilled workforce necessary to support safe space shuttle operations through retirement. For example, at that time, the program's focus was on returning the shuttle to flight, and other efforts, such as determining workforce requirements, were delayed. In our report, we recommended that NASA begin identifying the Space Shuttle Program's future workforce needs based upon various future scenarios. Scenario planning could better enable NASA to develop strategies for meeting future needs. NASA concurred with our recommendation. It has acknowledged that shuttle workforce management and critical skills retention will be a major challenge for the agency as it progresses toward retirement of the space shuttle, and it has taken action to address this issue. For example, since we made our recommendation, NASA has developed an agencywide strategic human capital plan and workforce analysis tools to assist it in identifying critical skills needs. NASA has also developed a human capital plan specifically for sustaining the shuttle workforce through retirement and then transitioning it. Additionally, in March 2006, the Senate Appropriations Subcommittee on Commerce, Justice, Science, and Related Agencies and NASA asked the National Academy of Public Administration (NAPA) to assist the agency in planning for the space shuttle's retirement and transition to future exploration activities. In February 2007, a NAPA panel recommended that the Space Shuttle Program adopt a RAND model for projecting a core workforce because of its emphasis on "long-term scheduling projections, quantification of core competencies and proficiencies, and analysis of overlapping mission needs." Under the RAND model, an organization maintains a core capability for any competency that will be needed in the future.
According to NAPA, this model is useful where a given expertise is not immediately required but is likely to be needed in the future—in this case, for the Orion Crew Exploration Vehicle. In July 2006, we reported that NASA's acquisition strategy for the Orion Crew Exploration Vehicle placed the project at risk of significant cost overruns, schedule delays, and performance shortfalls because it committed the government to a long-term contract before establishing a sound business case. Our past work has shown that developing a sound business case—one that matches requirements to available and reasonably expected resources before committing to a new product development effort—reduces risk and increases the likelihood of successful outcomes. For a program to increase its chances of success, high levels of knowledge should be demonstrated before significant commitments are made (i.e., programs should follow a knowledge-based approach to product development). At the time of our report, NASA had yet to develop key elements of a sound business case, including well-defined requirements, mature technology, a preliminary design, and firm cost estimates, that would support its plans for making a long-term commitment. Without such knowledge, NASA cannot predict with any confidence how much the program will cost, what technologies will or will not be available to meet performance expectations, and when the vehicle will be ready for use. NASA acknowledged that it would not have these elements in place until the project's Preliminary Design Review, scheduled for fiscal year 2008. As a result, we recommended that the NASA Administrator modify the agency's acquisition strategy for the Orion Crew Exploration Vehicle to ensure that the agency does not commit itself, and in turn the federal government, to a long-term contractual obligation prior to establishing a sound business case at the project's Preliminary Design Review. Although it initially disagreed with our recommendation, NASA subsequently took steps to address some of the concerns we raised. Specifically, NASA modified its acquisition strategy for the Orion project and changed the production and sustainment portions of the contract into options; the agency will decide whether to exercise these options after the project's critical design review in 2009. While these changes are in line with our recommendation and a step in the right direction, we continue to believe NASA's acquisition strategy is risky because it does not fully conform to a knowledge-based acquisition approach. Attempting to close the gap in human space flight by pushing forward development of the Orion Crew Exploration Vehicle without first obtaining the requisite knowledge at key points could well result in a system that not only fails to meet expectations but costs more and actually widens the gap. Since we last testified on this subject in September 2006, NASA has successfully completed the first major milestone for the Orion project, the Systems Requirements Review, a major step toward obtaining the information critical for making informed decisions. According to NASA's Orion contracting officer, NASA is also in the process of renegotiating the Orion contract to extend the Initial Operational Capability date of the system to 2014. Further, while this change will increase contract costs, the increase has already been accounted for in the Orion budget because the agency has been planning the change for over a year.
In addition, risks associated with schedule, cost, and weight continue to be identified for the Orion project. As we have previously testified, sound project management and oversight will be key to addressing the risks that remain for the Orion project as it proceeds with its acquisition approach. To help mitigate these risks, we have recommended in the past that NASA have in place markers (i.e., criteria) to assist decision makers in their monitoring of the project at key junctures in the development process. Such markers provide assurance that projects are proceeding with, and decisions are being based upon, the appropriate level of knowledge, and they can help lessen project risks. NASA has recently issued its updated program and project management requirements for flight systems in response to our recommendation. Changes to the policy, including the incorporation of key decision points throughout the project development life cycle, should provide an avenue for decision makers to reassess project decisions at key points in the development process to ensure that continued investment is appropriate. However, it is disciplined implementation of the policy, not the mere existence of the policy, that will determine success. Currently, we are evaluating the development of NASA's latest human-rated launch vehicle—the Ares I Crew Launch Vehicle. When completed, the Ares I vehicle will be capable of delivering the Orion spacecraft to low earth orbit for ISS missions and for exploration missions to the moon. As initially conceived by NASA in the Exploration Systems Architecture Study completed in 2005, the Ares I design would rely on the existing solid rocket boosters and main engines from the space shuttle as major components of its two stages. The current design for the Ares I, however, diverges from the initial design set forth in the architecture study and now includes elements from the Apollo-era Saturn V launch vehicle. Current plans are for Ares I to evolve the solid rocket boosters from the Space Shuttle Program from four segments to five segments and to build a new upper-stage engine based on an original Saturn V design. NASA maintains that these changes are necessary to increase commonality between the Ares I and the planned Ares V cargo launch vehicle and to reduce overall development costs for implementing the Vision. As NASA's design for the Ares I continues to evolve, careful planning and coordination between the Orion and Ares I development teams will be critical to ensuring that current developmental efforts result in hardware that satisfies the future requirements of these systems. Consequently, development problems on either of these systems could widen the gap. Our ongoing work is aimed at assessing whether NASA's acquisition strategy for Ares I reflects the effect of changes to the Ares I design incorporated since the vehicle was first conceived in the Exploration Systems Architecture Study as a shuttle-derived alternative. We are also evaluating the extent to which NASA's Ares I acquisition strategy incorporates knowledge-based concepts designed to minimize technical and programmatic risk. The Orion Crew Exploration Vehicle and the Ares I Crew Launch Vehicle are the first in a series of new systems to be developed in support of exploration activities. NASA must manage these projects carefully to avoid repeating historical instances of cost and schedule growth.
Indeed, while NASA has had many successes in the exploration of space, such as landing the Pathfinder and Exploration Rovers on Mars, NASA has also experienced its share of unsuccessful missions, unforeseen cost overruns, and difficulty bringing a number of projects to completion. For example, NASA has made several attempts to build a second generation of reusable human spaceflight vehicle to replace the space shuttle, such as the National Aero-Space Plane, the X-33 and X-34, and the Space Launch Initiative, none of which accomplished the objective of fielding a new reusable space vehicle. We estimate that these unsuccessful development efforts have cost approximately $4.8 billion since the 1980s. The high cost of these unsuccessful efforts and the potential costs of implementing the Vision make it important that NASA achieve success in developing new systems for its new exploration program. NASA's plans to retire the shuttle have the potential to greatly affect the supplier base that has been supporting that program for the last several decades, as well as to shape the future supplier base needed for its exploration program. Over the next few years, NASA will be making decisions about its supplier base needs, including which suppliers will be required for the remainder of the Space Shuttle Program, which will no longer be required for the program, and which will be needed to support exploration efforts. One concern is that NASA will be unable to sustain suppliers necessary to support the exploration program during the period between the shuttle's retirement and the resumption of human space flight. Another concern is that suppliers NASA determines are not needed for the exploration program will prematurely end their services, thus jeopardizing the safe and efficient completion of shuttle activities. In addition, issues such as obsolescence—already being experienced by some shuttle projects—could affect the exploration program, given the planned use of heritage hardware for some components of the Constellation projects. In an attempt to address these potential issues, NASA has been developing and implementing plans and processes to manage the transition of its supplier base. We are in the process of assessing how well NASA is positioning itself to effectively manage its supplier base to ensure both sustainment of the Space Shuttle Program through its scheduled retirement in 2010 and successful transition to planned exploration activities. The shuttle is uniquely suited for transporting crew and cargo to and from the ISS. However, with the scheduled retirement of the shuttle in 2010, NASA and its international partners will be challenged to fully support ISS operations until 2014, when the new crew exploration vehicle is scheduled to come on line. To fill this gap, NASA plans to rely on its international partners and commercial services to provide ISS logistics and crew rotation. Two recent studies have raised serious concerns about whether future ISS operations can be continuously supported. A 2006 report by the National Research Council noted that the capabilities, schedules, and funding requirements for NASA, international partners, and commercial cargo and crew vehicles were not yet firm enough to give the panel confidence that ISS exploration mission objectives have a high likelihood of being fulfilled.
A February 2007 report by the International Space Station Independent Safety Task Force, which was required by the NASA Authorization Act of 2005, noted that the transition from the space shuttle to post-shuttle systems for logistical support to the ISS will require careful planning and phasing of new capabilities. Specifically, care must be taken to ensure that adequate logistics and spares are provided to maintain a viable station. The task force report went on to say that if a commitment is made to an emerging logistics delivery capability and the capability does not materialize, then logistical support to the ISS could be lost for some time, seriously decreasing the utility of the space station and possibly resulting in its abandonment. We are reviewing NASA's plans for meeting ISS logistics and maintenance requirements after the shuttle retires, identifying the main risks to meeting those requirements, and assessing NASA's plans for addressing the risks. NASA has not developed a comprehensive cost estimate for transitioning or disposing of Space Shuttle Program facilities and equipment, which poses a financial risk to the agency. As NASA executes the remaining missions needed to complete the assembly of and provide support for the ISS, it will simultaneously begin the process of disposing of shuttle facilities and hardware that the Space Shuttle Program will no longer need or transitioning such facilities and hardware to other NASA programs. As the ninth largest federal government property holder, NASA owns more than 100,000 acres, as well as over 3,000 buildings and 3,000 other structures totaling over 44 million square feet. Currently, the Space Shuttle Program uses 654 facilities valued in excess of $5 billion. The Space Shuttle Program also manages equipment dispersed across the government and its contractors valued at more than $12 billion. NASA is in the process of evaluating its Space Shuttle Program facilities and equipment requirements and identifying existing facilities and equipment that will no longer be needed to support shuttle operations. Constellation and other NASA programs will determine whether they need any of the facilities or equipment released by the Space Shuttle Program. According to NASA officials, current assessments project that only 70 to 80 of the existing facilities are needed to support the development or operation of future exploration systems. Where facilities or equipment are no longer required by the Space Shuttle Program and no other use is identified, or where they are selected for disposal, they will transition to the resident NASA field center for disposition. It is worth noting that, even before the retirement of the shuttle, over 10 percent of NASA's facilities are underutilized or not utilized at all. One option NASA has is to lease underutilized facilities in exchange for cash and/or in-kind consideration, such as improvement of NASA's facilities or the provision of services to NASA. As directed by the NASA Authorization Act of 2005, we recently reported on NASA's enhanced use leasing program. Congress authorized NASA to employ enhanced use leasing at two demonstration centers, allowing the agency to retain the proceeds from leasing out underutilized real property and to accept in-kind consideration in lieu of cash for rent.
The act allows NASA to deposit the net proceeds (i.e., net of leasing costs) in a no-year capital account to use later for maintenance, capital revitalization, and improvement of the facilities, albeit only at the demonstration centers—Ames Research Center and Kennedy Space Center. However, unlike other agencies with enhanced use leasing authority, NASA is not authorized to lease back the property during the term of the lease. Furthermore, we found that the agency does not have adequate controls in place to ensure accountability and transparency and to protect the government. We recommended that the NASA Administrator develop an agencywide enhanced use leasing policy that establishes controls and processes to ensure accountability and protect the government's interests, including developing mechanisms to keep the Congress fully informed of the agency's enhanced use leasing activity. NASA concurred with our recommendations. After not receiving additional authority in the NASA Authorization Act of 2005, the agency is again requesting that the Congress extend enhanced use leasing authority to at least six NASA centers. NASA currently has other leasing authorities, but they require the agency to return to the U.S. Treasury any amounts exceeding cost. Further, NASA has indicated that it is preparing a package of legislative and administrative tools to help in the transition from the Space Shuttle Program to the Constellation Program. For example, in addition to requesting authority for increased use of enhanced use leasing, a NASA official informed us that one tool the agency might consider pursuing is the ability to keep within NASA the funds from the sale of facilities and equipment, rather than returning such funds to the Treasury. NASA also does not have a comprehensive estimate of the environmental cleanup costs associated with the transition and disposal of Space Shuttle Program facilities and equipment. The agency must comply with federal and state environmental laws and regulations, such as the National Environmental Policy Act of 1969, as amended; the Resource Conservation and Recovery Act of 1976, as amended; and the Comprehensive Environmental Response, Compensation, and Liability Act of 1980, as amended, in identifying and mitigating environmental concerns. Although NASA has an approach for identifying environmental risks, in our report on major challenges facing the nation in the 21st century, we pointed out that progress in cleaning up sites frequently does not meet expected time frames and that costs dramatically exceed available funding levels. For example, the Titan IV program spent approximately $300 million over 6 years cleaning up facilities, equipment, and tools. At this time, the extent of the Space Shuttle Program's environmental liabilities is not yet fully known. Paying for this liability may require a significant future outflow of funds at the same time that NASA will be facing many other competing demands for its limited dollars, such as development of Orion, Ares I, and other exploration projects. As it moves away from flying the shuttle, NASA acknowledges that it must realign where necessary and plan for a workforce that will not be quite as large. NASA projects that fewer resources will be required for operating and sustaining hardware, especially during vehicle processing and launch operations. The reduced reusability of future space systems will also result in less refurbishment work.
In addition, as new space systems are designed, emphasis will shift to personnel with skills in systems development and engineering, program management, and systems integration. Unfortunately, these skills will be in high demand at a time when other federal agencies and the private sector have similar needs. NASA projects that by fiscal year 2012 the total number of personnel needed to meet its strategic goals will decrease from 18,100 to 17,000. The agency is taking advantage of the flexibilities outlined in the NASA Flexibility Act of 2004 to attract highly qualified candidates; however, continued buyouts and the threat of a reduction in force have created a feeling of instability among the science and engineering workforce. NASA's senior leaders recognize the need for an effective workforce strategy in achieving mission success. NASA has a strategic human capital plan, but more work is needed in workforce planning and deployment. In addition, NASA's transition to full cost accounting in fiscal year 2004 left a number of its centers with more full-time equivalent staff than funded work to support them, a situation NASA refers to as "uncovered capacity." The Administrator has committed to operating and maintaining 10 centers and has transferred work to those centers with identified uncovered capacity. We are examining whether several federal agencies, including NASA, are taking sufficient steps to address their workforce challenges in a timely and comprehensive manner while sustaining focus on their missions and programmatic goals. Specifically, we are assessing the extent to which NASA's human capital framework is aligned with its strategic mission and programmatic goals; whether NASA is effectively recruiting, developing, and retaining critically skilled staff; and what internal or external challenges NASA faces in meeting its workforce needs. As noted earlier, NAPA recently completed a study that made recommendations to NASA on how to achieve a flexible and scalable workforce by integrating its acquisition and workforce planning processes. Since 1990, GAO has designated NASA's contract management as high risk, principally because NASA has lacked a modern financial management system that can provide accurate and reliable information on contract spending and has placed little emphasis on product performance, cost controls, and program outcomes. NASA has made progress toward implementing disciplined project management processes, but it has made only limited progress in certain areas, such as reengineering its contractor cost reporting process. As we reported, the current Integrated Enterprise Management Program does not provide the cost information that program managers and cost estimators need to develop credible estimates and to compare budgeted and actual costs with the work performed on the contract. NASA plans to spend billions of dollars to develop a number of new capabilities, supporting technologies, and facilities that are critical to enabling space exploration missions. The development of such capabilities will be largely dependent on NASA contractors, on which NASA spends about 85 percent of its annual budget. Because of such a large reliance on contractors to achieve its mission, it is imperative that NASA be able to track costs and to integrate financial decision making with scientific and technical leadership by providing decision makers accurate information.
To its credit, NASA is working to improve business processes and integrate disparate systems in order to improve efficiencies, reduce redundant systems, and improve the business information available to the acquisition community and mission support organizations. However, more effort will be needed to make the cultural transformation a reality. The Vision for Space Exploration puts NASA on a bold new mission. Implementing the Vision over the coming decades will require hundreds of billions of dollars and a sustained commitment from multiple administrations and Congresses over the length of the program. How well NASA overcomes the transition challenges that we and others have identified will not only affect NASA's ability to effectively manage the gap in U.S. human access to space, but will also affect the agency's ability to secure a sound foundation of support for the President's space exploration policy. Consequently, it is incumbent upon NASA to ensure that these challenges are being addressed in a way that brings accountability and transparency to the effort. Mr. Chairman and Members of the Subcommittee, this concludes my prepared statement. I would be happy to answer any questions you may have at this time. For further information regarding this testimony, please contact Allen Li at (202) 512-4841 or lia@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. GAO staff who made key contributions to this testimony include Greg Campbell, Richard Eiserman, Yanina Golburt, James L. Morrison, Jeffrey M. Niblack, Shelby S. Oakley, Jose A. Ramos, Sylvia Schatz, and John Warren. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
On January 14, 2004, the President announced a new Vision for space exploration that directs the National Aeronautics and Space Administration (NASA) to focus its efforts on returning humans to the moon by 2020 in preparation for future, more ambitious missions. Implementing the Vision will require hundreds of billions of dollars and a sustained commitment from multiple administrations and Congresses. Some of the funding for implementing exploration activities is expected to come from funding freed up after the retirement of the Space Shuttle, scheduled for 2010, and the projected termination of U.S. participation in the International Space Station by 2016. Congress, while supportive of the effort, has voiced concern over the potential gap in human space flight. In the NASA Authorization Act of 2005, Congress stated that it is the policy of the United States to have the capability for human access to space on a continuous basis. NASA has made it a priority to minimize the gap to the extent possible. GAO provides no recommendations in this statement. However, GAO continues to emphasize that, given the Nation's fiscal challenges and NASA's past difficulty developing systems within cost, schedule, and performance parameters, it is imperative that the agency manage this transition in a fiscally competent and prudent manner. NASA is in the midst of a transition effort of a magnitude not seen since the end of the Apollo program and the start of the Space Shuttle Program more than 3 decades ago. This transition will include a massive transfer of people, hardware, and infrastructure. Based on ongoing work and work completed to date, we have identified a number of issues that pose unique challenges to NASA as it transitions from the shuttle to the next generation of human space flight systems while at the same time seeking to minimize the time the United States will be without its own means to put humans in space. These issues include sustaining a viable workforce; effectively managing systems development efforts; managing the supplier base; providing logistical support to the International Space Station; identifying and disposing of property and equipment; ensuring adequate environmental remediation; and transforming its business processes and financial management system. NASA already has in place many processes, policies, procedures, and support systems to carry out this transition. However, successful implementation will depend on thoughtful execution and effective oversight. How well NASA overcomes some of the challenges we have identified will not only affect NASA's ability to effectively manage the gap in U.S. human access to space, but will also affect the agency's ability to secure a sound foundation for the President's space exploration policy.
The CPI measures the change in prices of a fixed market basket of goods and services purchased directly by urban consumers. These purchases are for food, clothing, shelter, fuels, transportation, medical care, entertainment, and other goods and services that people buy for day-to-day living. Only expenditures made by consumers are captured in the CPI. The CPI is used by the federal government, businesses, labor organizations, and private citizens. According to BLS, the CPI is used as an economic indicator of inflation; an escalator for wages, income payments, and tax brackets; and a deflator of selected economic statistical series. For example, through collective bargaining contract negotiations in 1996, 1.7 million workers had their wages raised on the basis of changes in the CPI. As a result of price changes reported in the CPI, 43.5 million Social Security beneficiaries and 25.8 million food stamp recipients had their benefits increased for inflation in 1996. According to BLS, to construct the CPI, the prices of more than 94,000 items are collected each month (e.g., margarine sold in tubs, sticks, or squeeze bottles) and aggregated into 206 "item strata" (e.g., fats and oils). In making the monthly calculations, according to BLS, weights are used to give proportionate emphasis to price changes of one item relative to other items in the CPI. According to BLS, two sets of weights are computed from different sources of information. BLS computes one set of weights from the Consumer Expenditure Survey (CEX) data. These weights, which are the focus of this report, are used to aggregate the 206 item strata into the overall index number for the CPI. In this report, we refer to this first set of weights as "expenditure weights." The second set of weights is derived primarily from information taken from the Point-of-Purchase Survey (POPS). These weights, which we term "point-of-purchase weights" in this report, are used to combine the prices of the 94,000 items into the 206 item strata. In other words, the point-of-purchase weights aggregate the prices of the individual items into the 206 item strata and provide the base to which the expenditure weights are applied to calculate the CPI. The two sets of weights are updated at different time intervals. BLS began to publish the CPI regularly in 1921 and has changed expenditure weights only when making major revisions to the CPI. These major CPI revisions occurred in 1940, 1953, 1964, 1978, and 1987; another revision is scheduled for 1998. BLS instituted the POPS in 1978, and all of the point-of-purchase weights are scheduled to be updated over a 5-year period, according to BLS. The CPI is often referred to as a cost-of-living index and is used as a proxy for the cost of living to adjust, for example, federal income tax brackets and some federal payments. Although some elements of the CPI reflect cost-of-living concepts, the CPI was not designed to be a cost-of-living index. As usually defined, a cost-of-living index would be broader in coverage than an index that is based on consumer expenditures. BLS has said through the years that the CPI is not a cost-of-living index, and to date, the federal government has not developed a comprehensive cost-of-living index. In 1961, the Price Statistics Review Committee (hereafter called the Stigler committee for its chairman, George Stigler) recommended that the conceptual framework of the CPI be modified to represent a cost-of-living index.
It also supported comprehensive revision of CPI weights at least once every decade and suggested that the more volatile categories of CPI weights be updated at least once every 5 years. The BLS Commissioner in 1961 agreed that the CPI should be revised every 10 years. Although the Commissioner agreed with the suggestion to update CPI weights more often, he cited some obstacles that he thought, at the time, would preclude BLS from doing so. We discuss these obstacles later in this report in the section on BLS’ reasons for not updating expenditure weights more often. In reporting to Congress in December 1996, the Boskin commission said its overarching recommendation was that BLS establish a cost-of-living index as its objective in measuring consumer prices. The Boskin commission concluded that the CPI overstates inflation because of four sources of bias: substitution bias, new products bias, quality change bias, and new outlets bias. The commission further subdivided substitution bias into what it termed lower-level bias and upper-level bias. The lower-level bias concerns the aggregation of the prices of the individual items, and the upper-level bias concerns the 206 item strata, which are the subject of this report. To address upper-level substitution bias, the Boskin commission recommended that the fixed market basket CPI be abandoned and replaced with two new formulas that would enable the CPI to more closely reflect the cost of living. One formula, according to the commission Chairman, would be a true superlative index; the other formula would be a modified superlative index. A superlative index, by definition, would continually change the market basket to reflect current consumer spending. BLS has requested funding for fiscal year 1998 to continue the fixed market basket CPI and to publish a CPI with a superlative index formula in 2002. Since the fixed market basket CPI would continue to be published, the discussion on how frequently to update the expenditure weights is pertinent. To obtain opinions on updating the CPI expenditure weights more often, we asked two former BLS officials and eight others who were knowledgeable about the CPI how often the weights should be updated. The eight other individuals had conducted research in connection with the CPI: four of the eight individuals were members of the Boskin commission, two were academicians, one was employed by a major economic research institution, and one was a member of the Stigler committee. (App. I describes how we selected these eight individuals.) To obtain information on the practices followed by other industrialized countries in updating their consumer price indexes, which also addresses our first objective, we obtained information from BLS and from publications of the Organization for Economic Cooperation and Development and the Canadian government on how often G-7 countries update their CPIs. To estimate the cost to BLS of updating the expenditure weights for the CPI on a 5-year cycle, we asked BLS to provide us with certain actual and estimated cost data. We asked BLS to provide us with the costs associated with the 1987 revision and the projected costs for the 1998 revision. In addition, we asked BLS for its estimate of what the costs would have been to update the CPI in 1992 and its estimate of what the cost might be to update the CPI in 2003. We did not specify to BLS what assumptions to make or what items to include or exclude in estimating the costs for 1992, 1998, and 2003. 
We also did not evaluate the reasonableness of BLS' assumptions or estimates. BLS provided us with costs for the 1987 revision and estimated costs for the 1998 revision and a 2003 update of the CPI. BLS suggested that the cost for a 1992 update could be derived by deflating the cost of the 2003 update, which we did with the Gross Domestic Product (GDP) price index. To estimate the dollar effect on the federal budget if the expenditure weights for the CPI were updated on a 5-year cycle, we obtained assistance from BLS and CBO, which analyzes budget-related issues and provides cost estimates for legislative proposals to Congress. We asked BLS to provide a range—an upper and a lower bound, in percentage points—of the possible change that would occur to the CPI with a 5-year update. We asked CBO to estimate the effect that a 5-year update of the CPI would have on the federal budget, assuming no other changes in tax or spending levels and no other changes in the economy. To do this, we asked CBO to use BLS' lower and upper estimates of change and the midpoint of these estimates. To illustrate the effect of a 5-year update that would begin in 2003, we asked CBO to make projections for the years 2003 through 2007 as it normally would and then to reestimate after adjusting for the effects of a 5-year update. CBO's policy is to provide projections for current and future years but not to provide estimates for past years. For that reason, we estimated how the federal budget might have been affected if the expenditure weights had been updated in 1992, which was 5 years after the 1987 major revision. In making our estimates, we used CBO's estimates for 1998 through 2007 to backcast to 1992; in doing so, we assumed that the trend used for the years 1998 through 2007 could reasonably be applied to the years 1993 through 1998. We discussed the methodology we used in making the estimates with CBO officials, and they said that the methodology and the results we obtained were reasonable. In connection with the impact on the federal budget, as requested by your office, we asked the Chief Actuary of the Social Security Administration (SSA) to estimate the effect a change in the CPI would have on the average benefit paid to retired workers. To make the estimate, we asked SSA to use the midpoint of BLS' range of possible change that would occur to the CPI with a 5-year update. Using that midpoint and the economic assumptions used in the President's Fiscal Year 1998 Budget, SSA estimated the change in the average monthly benefit check for retired workers, beginning with December 2003 (payable in January 2004) and continuing through December 2007. To identify and assess why updates to the CPI weights have been spaced 10 years or so apart since 1940, we talked with present and past officials of BLS and obtained their views on the reasons for this spacing. We also reviewed the 1961 congressional testimony of a BLS Commissioner in which he addressed the subject of BLS' timetable for revising the CPI. In our assessment, we (1) collected and analyzed information on past comparisons between indexes that applied old and new expenditure weights, (2) obtained information on how BLS collects its source data for the CPI, (3) reviewed BLS budget information to ascertain BLS' plans for future changes in its indexes, and (4) compared the estimated costs and benefits of a 5-year update to place an update in practical perspective.
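To fix ideas for the weight comparisons just described, the role of the expenditure weights can be written in a stylized form. This is a simplification offered only for illustration, and the notation is ours rather than BLS': the official CPI is a modified Laspeyres index built up through the two-stage aggregation described earlier, which this one-line version collapses.

\[
I_t \;=\; 100 \times \sum_{i=1}^{206} w_i^{(b)} \, \frac{p_{i,t}}{p_{i,0}},
\qquad
w_i^{(b)} \;=\; \frac{e_i^{(b)}}{\sum_{j=1}^{206} e_j^{(b)}},
\]

where \(p_{i,t}\) is the price level of item stratum \(i\) in month \(t\), \(p_{i,0}\) is its price in the reference period, and \(e_i^{(b)}\) is consumer spending on stratum \(i\) measured from CEX data collected in weight-base period \(b\). Holding the price data fixed and changing only the base period \(b\) isolates, at least approximately, the effect of the age of the expenditure weights.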
In our assessment, most of the comparisons between indexes with old weights and indexes with new weights probably reflected differences that were not due to changes in the expenditure weights alone. Some of the indexes we used were produced by BLS for "overlap" periods: when major revisions to the CPI were made, BLS calculated two indexes for several months, one using the weights that had been in effect before the revision and a second using the new weights created for the revision. However, during a revision, many factors can and do change. For example, the geographic locations where data are collected are changed to some extent, as are the items of goods and services in the market basket. Therefore, the differences that result from comparing the two indexes may be due to several factors, and the effects of changes to the expenditure weights cannot be isolated from the effects of other changes to the index data. With this caveat in mind, we treat the differences as indicators of the effects of an update of the expenditure weights because an update of the weights is unlikely to occur in isolation from the other factors that are associated with revisions. For example, a 1992 update would most likely have incorporated a market basket that was based on different geographic areas than those used in the 1987 revision because, in 1986, changes were made in the geographic locations where expenditure data were collected; such geographic changes are associated with major revisions. In addition to overlap studies, we examined the effect of the age of weights with indexes that were calculated with alternative base-year periods. For example, comparisons were made of the official CPI's 3-year base of 1982 through 1984 with alternative 3-year base periods (e.g., 1987 through 1989). In these and other comparisons, we applied a concept from the economic literature: an index is more accurate if the expenditure weights used to compute it represent, as much as practical, current consumer spending. However, in our review of the economic literature, we did not identify any theoretical guidance on how often (e.g., 5 years as compared with 10 years) expenditure weights should be updated. (See app. I for more information about our objectives, scope, and methodology.) As previously noted in this section, this report includes, and often relies on, estimates and comparisons prepared by BLS, CBO, or SSA. We did not verify the computerized data that the agencies used in producing these estimates and comparisons; verification, in our opinion, would have been impractical because it would have been costly and time consuming. In addition, the estimates and comparisons were within the scope of activities that BLS, CBO, and SSA normally perform. Therefore, we used their estimates and comparisons. The results we obtained are intended to contribute to the discussion of how often the CPI should be updated, but they are not intended to represent all future effects of shortening the updating cycle. Neither is our work intended to evaluate a change in the basic formula that could address substitution bias in the CPI. The point of shortening the updating cycle would be to have the CPI reflect, as closely as practical, the current spending patterns of consumers, regardless of whether the index is pushed upward or downward.
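The following minimal sketch, using invented numbers, makes the old-versus-new-weight comparison concrete. It assumes simple one-stage, fixed-weight aggregation over three hypothetical strata; it is not BLS' production methodology, and none of the figures are real.

```python
# Minimal sketch of an old- vs. new-weight index comparison.
# All numbers are invented; the official CPI is a modified Laspeyres
# index with two-stage aggregation over 206 item strata.

# Price relatives (current price / reference-period price) for three
# hypothetical item strata.
price_relatives = {"food": 1.02, "shelter": 1.05, "apparel": 0.99}

# Expenditure shares from an old weight-base period and from a newer
# one in which consumers have shifted spending away from shelter,
# whose price rose the most.
old_weights = {"food": 0.40, "shelter": 0.35, "apparel": 0.25}
new_weights = {"food": 0.45, "shelter": 0.25, "apparel": 0.30}

def fixed_weight_index(relatives, weights):
    """Aggregate price relatives with fixed expenditure weights (x100)."""
    return 100 * sum(weights[k] * relatives[k] for k in relatives)

print(f"old weights: {fixed_weight_index(price_relatives, old_weights):.2f}")  # 102.30
print(f"new weights: {fixed_weight_index(price_relatives, new_weights):.2f}")  # 101.85
```

In this invented example, the index computed with the newer weights shows lower measured inflation because the outdated weights keep giving full emphasis to a stratum from which consumers have shifted away; as discussed later in this report, that is the direction in which most of the historical comparisons point.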
We did our work in Washington, D.C., between November 1996 and July 1997 in accordance with generally accepted government auditing standards. We requested comments on a draft of this report from the Secretary of Labor, the Chair of the Council of Economic Advisers (CEA), the Director of the Office of Management and Budget (OMB), and the Chair of the Board of Governors of the Federal Reserve System or their designees. Comments by BLS, CEA, OMB, and the Federal Reserve are discussed near the end of this letter and are reproduced in appendixes V through VIII. We spoke with 10 individuals who were former officials of BLS or who had otherwise studied the CPI, and they were unanimous in stating that 10 years between updates of the expenditure weights was too long. However, there was less agreement among these individuals on exactly how often the updating should occur. According to information obtained from BLS and international publications, seven major industrial countries have consumer price indexes, but, among them, only the United States updates its CPI as infrequently as once a decade. Two former BLS officials told us that updating the weights every 5 years or so was about right. One official told us that the POPS should be rotated more frequently, which would affect point-of-purchase weights; he also advocated a different method of aggregating CEX data to develop the expenditure weights for the 206 item strata. The other former official, a previous BLS Commissioner, noted that doing an update more frequently than every 5 years would be too often. We also spoke with a former member of the Stigler committee and four members of the Boskin commission. The former Stigler committee member said that updating the CPI only every 10 years was entirely too infrequent. However, he gave low priority to updating the expenditure weights more often because he believed that getting new consumer items into the CPI and accounting for product improvement were more important. The four members of the Boskin commission also said that the expenditure weights for the market basket should be updated more frequently than every 10 years. However, the members regarded more frequent updating as only one step toward improving the CPI. The Boskin commission recommended abandoning the fixed market basket aspect of the CPI and adopting a true superlative index formula and a modified superlative index formula to account for changing market baskets. The spirit of the Boskin commission's recommendations, according to its Chairman, was for the CPI to be more current in order to reflect what is occurring in the economy. He said that if there were no change in existing products and no new products in the economy, then updating the expenditure weights would be the only step that would need to be taken. However, the economy is changing, with new products and product improvements occurring constantly; therefore, more frequent updating was only a step toward what should be done to improve the CPI. He said BLS should be in a permanent revision mode. Because different aspects of the CPI interact with each other, a change in the expenditure weights would complement other steps that could be taken, such as changing the POPS sample more often than every 5 years, increasing the size of the CEX, and using estimation methods to adjust for changes in the quality of items in the CPI. Two of the three remaining members of the Boskin commission with whom we spoke told us that the expenditure weights should be updated more frequently than every 5 years.
The other member, citing concern about resource constraints faced by BLS, gave preference to providing financial support to implement the Boskin commission's recommendations to improve the CPI rather than to funding a more frequent update of the market basket. The remaining three individuals we spoke with also supported updating more frequently than about every 10 years as a more accurate way to track inflation. One of them said that doing so would not necessarily lead to lower measures of inflation. In addition, although we did not interview the Chairman of the Board of Governors of the Federal Reserve System, we noted that he stated in a March 1997 speech that there was a bias problem in the CPI given the failure to change the expenditure weights more often than about every 10 years. However, a representative of the Federal Reserve, in commenting on a draft of this report, said that although the Federal Reserve Chairman has said that out-of-date weights are a source of bias in the CPI, the Chairman does not endorse merely updating them more frequently. He said that the Chairman has testified before Congress in support of changes recommended by the Boskin commission and that the payoff of departing from a fixed-weight structure of the CPI is likely to be much more important in improving the accuracy of the CPI than more frequent updating of expenditure weights alone. As previously mentioned, the United States is one of the seven leading industrial countries—the G-7—that have met to coordinate economic and monetary policy. In addition to the United States, the other six G-7 countries also track consumer prices through a market basket of goods and services and weight the prices of the items in the market basket. According to a BLS official and published information, Japan and Italy update expenditure weights every 5 years; Germany updates, on average, about every 5 years; Canada updates every 4 years—except for the last update (6 years), when it reengineered its index; and France and the United Kingdom update every year. However, BLS officials noted that some of these countries base their updates on national data that are not comparable to the U.S. continuing CEX. Some of the countries do not use expenditure data collected directly from consumers; others use consumer expenditure data that require respondents to recall 12 months of expenditures. BLS officials said the system used in the U.S. CPI for maintaining current and representative samples of items to price is more advanced than in most other countries. We are not endorsing the practices of any other country over BLS' practices; we provide this information for comparative purposes to show the priority other countries place on keeping their market baskets current. The estimated cost of updating expenditure weights is relatively small in comparison with the cost of major revisions. For the purposes of estimating costs, we assumed in this report that updates of expenditure weights would occur in 1992 and 2003, in each case 5 years after a major revision. On the basis of data supplied by BLS, the estimated cost to have updated the weights in 1992 would have been $2.4 million spread over 3 years. According to BLS, the estimated cost to update the expenditure weights in 2003 would be $3.1 million over a 3-year budget period. BLS reported that the 1987 major revision cost $47 million over 5 years. According to BLS, the cost of the planned 1998 revision is expected to be about $66 million over 6 years.
BLS noted that the estimated cost of an update excluded many activities that are included in the costs for revisions to the CPI. The excluded activities include using recent decennial census data to reselect the geographic areas and housing samples used in the CPI's surveys; evaluating, replacing, and updating CPI data processing systems; and recategorizing the items in the CPI market basket. The activities that BLS included in its estimated cost to update the expenditure weights were changing the weights of the items in the market basket, redefining the item strata in a limited way, and including any new items resulting from the limited redefinition of the item strata. For example, the costs of an update might include those associated with adding an item stratum for cellular telephone services. Because the CPI is used to index federal income tax brackets and certain federal spending, changes in the CPI can affect the federal budget. BLS, at our request, estimated the impact on the CPI if the expenditure weights were updated on a 5-year cycle, and, with the help of CBO, we used those estimates and their midpoint to illustrate the effect that updating might have on the federal budget. In making these estimates, both we and CBO assumed that there were no other changes in tax or spending levels and no other changes in the economy during the periods under review. BLS said the historical evidence suggests that shifting to a 5-year update of the market basket weights could reduce the annual rate of growth of the CPI by between 0 (zero) and 0.2 percentage point. Although CBO does not backcast, we used a method discussed with CBO to estimate that an update in 1992 would have reduced the federal budget deficit by between $0 and $32.4 billion over the 6-year period until the implementation of the upcoming 1998 revision. According to CBO, an update in 2003 could increase the projected budget surplus by between $0 and $20.2 billion over a 4-year period. Using estimates provided by CBO for an annual 0.1 percentage point reduction in CPI growth, and assuming no other changes in policy or the economy, we estimated that the federal deficit would have been reduced by a cumulative total of $16.2 billion over the 6 years following an update in 1992. According to CBO, with an update in 2003 that reduced CPI growth by 0.1 percentage point annually, and assuming that nothing else changed, the projected federal budget surplus would be increased by a cumulative total of $10.8 billion over the 4 years following the update. As shown in figure 1, most of the impact of such a reduction in the CPI would be on federal outlays—such as reduced payments to Social Security beneficiaries, which account for most of the outlays—and most of the impact would occur in the later years. For example, according to estimates by SSA's actuaries, with an annual 0.1 percentage point reduction in CPI growth, the average monthly benefit check for retired workers in 2004 would be reduced by $0.91, from $939.94 to $939.03; by the fourth year (2007), the average monthly check would be reduced by $3.83, from $1,032.56 to $1,028.73.
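The arithmetic behind these benefit figures can be sketched as follows. This is a simplified compounding illustration: the starting benefit and the 3.1 percent baseline cost-of-living adjustment are invented, whereas SSA's actual estimates rest on the economic assumptions in the President's Fiscal Year 1998 Budget, so the sketch approximates the mechanism rather than reproducing the official numbers.

```python
# Simplified illustration of how an annual 0.1 percentage point
# reduction in CPI growth compounds through yearly cost-of-living
# adjustments. The starting benefit and 3.1 percent baseline COLA are
# invented for illustration only.

starting_benefit = 910.00  # hypothetical average monthly benefit ($)
baseline_cola = 0.031      # hypothetical annual COLA under current CPI
reduced_cola = 0.030       # COLA if CPI growth were 0.1 point lower

baseline = reduced = starting_benefit
for year in range(2004, 2008):
    baseline *= 1 + baseline_cola
    reduced *= 1 + reduced_cola
    print(f"{year}: ${baseline:.2f} vs ${reduced:.2f} "
          f"(difference ${baseline - reduced:.2f})")

# The gap widens each year, about $0.91 after the first adjustment and
# roughly $4 by the fourth, mirroring the pattern in SSA's estimates
# ($0.91 in 2004 growing to $3.83 by 2007).
```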
BLS has taken actions to respond to the 1961 Stigler committee study, and the current BLS Commissioner told us one response in particular enabled BLS to markedly improve the representativeness of consumer items and prices in the CPI. However, BLS had not acted on the Stigler committee's suggestion to update the more volatile categories of weights at least once every 5 years, and BLS officials cited several reasons for not doing so. The most important reason, they said, was a lack of empirical evidence to support more frequent updates and an absence of theoretical guidance on how often to do them. The officials said that another reason was previous difficulties in obtaining funds for major revisions of and improvements to the CPI. They also said that, in the past, certain data necessary to update expenditure weights were unavailable between major revisions, but that situation has changed. In addition, BLS cited its concern with determining the best approach to improve the CPI to make it more reflective of current consumer spending. We examined the information surrounding these reasons, and, aside from the issue of funding, which is unpredictable at this point, we concluded that the evidence suggests that more frequent updates would be beneficial. As of August 1997, BLS was studying how often to update the expenditure weights. The current BLS Commissioner said BLS has implemented many of the Stigler committee's recommendations. In her view, the most important Stigler committee recommendation concerned the selection, for price tracking, of individual items of goods and services. She said that in response to the recommendation, BLS developed and began using the POPS in 1978 to identify sales outlets, which has allowed BLS to incorporate new items into the CPI that otherwise would not have been incorporated until a major revision. She said that under the methodology used with the POPS, 20 percent of the outlets and items tracked are newly selected each year, which changes the entire sample within 5 years. Thus, according to the Commissioner, a large part of the reason for wanting to update weights more frequently—maintaining the representativeness of the item and outlet samples—is already accomplished on a 5-year rotation. Evaluating BLS' response to every Stigler committee recommendation was not the purpose of this review. But, regarding the Commissioner's statement about making the CPI more current through the use of the new POPS methodology, we believe that changing the procedures used to select retail outlets and items does make the CPI somewhat more representative of what consumers are purchasing. However, implementation of the POPS still does not address the expenditure weights for the 206 item strata, which remain fixed until the entire market basket is revised. A 5-year rotation in the POPS does improve the CPI in terms of keeping item samples current and introducing new goods, and it also updates the point-of-purchase weights every 5 years. Even though BLS has applied a current-is-better approach to the point-of-purchase weights, it has not applied that same approach to the expenditure weights. Consequently, the items in the market basket are still aggregated into the CPI with expenditure weights that reflect outdated consumer purchases. We spoke with the current BLS Commissioner and other current BLS officials about the timing of major revisions and about the obstacles to updating the weights. We discussed the timing of major revisions because expenditure weights have been updated only during these revisions. The Commissioner said she intuitively agreed that a 10-year period is long, but she was not sure what time frame was best.
She provided several reasons for the length of time between major revisions to the CPI and reasons why BLS was uncertain about undertaking a weight update independent of a major revision. As previously mentioned, we also obtained the comments of two former BLS commissioners. The reasons given by these three BLS commissioners are presented in the following subsections along with, as appropriate, our related evaluation. More frequent updating of the CPI weights has not been at the top of BLS' priority list, according to the current Commissioner, who also said that only recently has there been any systematic evidence that inflation, as measured by the CPI, is affected by the age of expenditure weights. She also said that even this evidence is limited and weak. In addition, according to the Commissioner, there is neither a theoretically "best" frequency for updating the weights nor any theoretical reason why more recent weights are "better." Therefore, she said, the decision on how often to update must be made on commonsense and cost-benefit terms. BLS' view that there is insufficient evidence to support more frequent updates is long-standing. In 1961, the then BLS Commissioner testified before a congressional committee that there was no evidence to support more frequent weight updates, and he cited a need for additional research. The preponderance of the data we reviewed does not support BLS' statement, whether made currently or in 1961, that there is insufficient empirical evidence to support the need for more frequent updating of expenditure weights. In his 1961 testimony, the then Commissioner cited three studies on which he based his conclusion, and each study covered a different group of years between 1925 and 1939. For the first two periods, the old weights produced less of a decline in prices than the new weights. For the last period, the old weights produced more of an increase in prices than the new weights. The differences between the indexes produced by the old and new weights were 0.1 percentage point, 1.2 percentage points, and 0.1 percentage point, respectively. In 1953, BLS revised the CPI and updated the expenditure weights. It applied the new weights to January through June 1953 consumer price data. In response to a presidential request, BLS also applied the weights used in the 1940 revision to the January through June 1953 price data. The index with the old weights showed an annual understatement of inflation of 0.5 percentage point in comparison with the index using the new weights. In his 1961 testimony, the BLS Commissioner explained that the 1953 comparisons were different from those previously described for 1925 through 1939: the differences found in the studies for the earlier years basically reflected changes in expenditure weights, while the comparisons for the 1953 revision also reflected other factors, such as changes in the cities included in the CPI, in addition to expenditure weight changes. As shown and analyzed in appendix III, additional empirical evidence has become available since 1961 that also indicates that the measurement of inflation is affected by the age of the expenditure weights. For example, BLS estimates that when the upcoming 1998 revision is introduced, CPI growth will be lowered by 0.1 or 0.2 percentage point. In addition, we reviewed historical data from overlap studies in which BLS continued calculating the CPI with both the old and new weights for 6 months following a major revision.
Although it is impossible to identify exactly to what extent other factors contributed to the differences, indexes produced by the old weights overstated inflation in comparison with indexes produced by the new weights in the 1964 and 1987 overlap comparisons. The reverse was true in 1978, but that difference may have been due to a fundamental change in BLS procedures associated with the implementation of the POPS.

Additional BLS studies of the effects of the age of expenditure weights include comparisons of indexes calculated with alternative 3-year base periods, in which BLS compared the official CPI's 3-year base period of 1982 through 1984 with alternative 3-year periods since then. For example, an index with a 1987 through 1989 base averaged 0.11 percentage point lower per year over a 5-year period than the index with the 1982 through 1984 base years. As a result of our examination of these and other BLS studies, we concluded that the best available evidence indicates that indexes with newer weights reduce the growth of the CPI by about 0.1 percentage point per year.

Although theoretical guidance is not available on all facets of updating expenditure weights, such as exactly when to update, economic literature suggests that an index is more accurate if the expenditure weights used to compute it represent, as much as practical, current consumer spending. In addition, the Stigler committee in 1961 provided a commonsense principle on when to revise the weights: a revision is necessary when the weight base has changed appreciably. On the basis of the statements of individuals with whom we talked (see discussion of these views on pp. 11 to 13) and the CPI weight comparisons we reviewed (previously presented in this subsection and in greater detail in app. III), there is sufficient reason to believe that the weight base had changed appreciably before major revisions to the CPI.

The BLS Commissioner pointed to using common sense and a cost-benefit analysis to provide guidance on how often to update the weights. At the time of the Stigler committee's report in 1961 and until the early 1970s, the CPI was used in a very limited way in the federal sector to index federal programs for the effects of inflation; the first large income program to be adjusted with the CPI was civil service retirement payments in 1962. Therefore, changes in the CPI had relatively little effect on federal expenditures and had no direct impact on federal receipts. In the 1990s, however, the federal government uses the CPI to index a much broader set of programs, including federal income tax brackets and certain federal payments, that directly affect a larger share of the population and a much larger volume of federal receipts and expenditures. These uses, in our view, provide a commonsense basis for making the index a more accurate reflection of what consumers are buying in a rapidly changing economy.

Updating the expenditure weights more often appears to be supported from a cost-benefit standpoint as well. BLS estimated that it would cost $3.1 million over 3 years to update the expenditure weights in 2003.
If the growth in the CPI decreased by 0.1 percentage point per year, CBO estimates show that this would lead to a cumulative increase of $10.8 billion in the projected budget surplus over the 4 years after the update. However, regardless of the effect on the budget, the point of doing an update outside of a major revision would be the increased value of having the CPI reflect, as closely as practical, the current spending patterns of consumers.

Funding for past major revisions has not always been easy to obtain. The 1987 revision was delayed 1 year because of funding limits. Similarly, the scheduled 1998 revision's start was delayed 1 year, until 1995, because of funding limits. A former BLS Commissioner with whom we spoke also identified funding as a problem. She said that funding for past major revisions was held up either within the Department of Labor, by OMB, or in the appropriations process. Even before the tenure of this former Commissioner, obtaining funds to revise the CPI was apparently a problem. In 1961, the then BLS Commissioner, in testifying before a congressional committee, explained that under the then present practice, revisions to the CPI were undertaken only when BLS was successful in convincing the Bureau of the Budget—now OMB—and Congress that there was an urgent need to bring the CPI up to date. BLS also has had difficulty in obtaining funds, apart from a major revision, for improvements to the CPI. In the early 1990s, BLS officials told us, BLS requested $450,000 to improve quality adjustment procedures in consumer electronics, shelter, and apparel, but it did not receive these funds from Congress.

BLS officials said CEX data and decennial census data are essential to establishing the CPI expenditure weights. CEX data are obtained from consumers and identify the items of goods and services that consumers have been purchasing. Among their uses in the CPI, decennial census data are used in selecting the geographic locations from which samples of consumers are surveyed for CEX purposes. In 1961, the then BLS Commissioner said that more frequent updates required a continuing CEX, which did not exist at that time. The current Commissioner said that, until the institution of the continuing CEX that allowed BLS to decrease the number of units surveyed and to make the CEX an ongoing program, an update of the weights every 5 years would have been costly because it would have required a special CEX. However, a continuing CEX program has been established, and, from a practical perspective, the first possible 5-year update of the weights using CEX data would have been in 1992, which is 5 years after the first CPI revision (1987) that used the continuing CEX data. The need to have the CPI reflect geographic movement of the population as measured by the decennial census was cited by BLS officials as a reason to make a major revision every 10 years. However, BLS officials said that they do not have to wait for new decennial census data before updating expenditure weights.

Updating the market basket expenditure weights is not the only way in which BLS could make the CPI more representative of current consumer spending. The Boskin commission recommended another way, which was through the concept of superlative index formulas. Although BLS plans to publish a superlative index, BLS does not see it as a replacement for the fixed market basket CPI.
As long as BLS publishes the fixed market basket CPI, updating the weights more often than once a decade would remain important. BLS has the technical ability now to update the expenditure weights, and, in commenting on a draft of this report, the BLS Commissioner said BLS was developing a new updating policy. To develop this policy, according to the Commissioner, BLS was studying what frequency will yield the most accurate CPI and best support the CPI's many uses.

According to the BLS Commissioner, the use of superlative indexes, such as those BLS is producing on an experimental basis, is the appropriate way to address what the Stigler committee sought in its recommendation for more frequent updating of weights. That is, a superlative index reflects changes in consumer spending in response to changes in relative prices, and, under certain assumptions, a superlative index is free of upper-level substitution bias. In commenting on a draft of this report, the BLS Commissioner said that because the calculation of superlative indexes requires current-period expenditure as well as price data, they can be published only with a lag, and this lag precludes their use in the CPI. The CPI, she said, is produced monthly and revised only in unusual circumstances.

Although BLS plans to begin publication of a superlative formula index in 2002, BLS officials stated that they also plan to continue to publish the fixed market basket CPIs. Therefore, those who use the CPI for escalation purposes would have to choose among the published CPIs, including the fixed market basket CPI. In addition, the federal government's use of the CPI is legislatively tied to the fixed market basket concept in some instances. For example, the U.S. tax code specifically identifies the use of the CPI-U for automatic inflation adjustments of federal income tax brackets and deductions for personal exemptions. Therefore, unless otherwise changed by legislation or unless BLS named its superlative index the CPI-U, the fixed market basket CPI would be used in these programs.

The Boskin commission recommended that BLS replace the fixed market basket CPI with two new index formulas: (1) an index that would be updated annually and revised historically to incorporate measurement improvements and (2) a monthly index that would be based on a "trailing" 2- or 3-year average of CEX data. According to the Chairman of the Boskin commission, the annual index would use a true superlative index formula, whereas the monthly index would use a modified superlative index formula. He said the spirit behind the commission's recommendation was for BLS to move as close as practical to a CPI that would reflect the cost of living.

Although BLS plans to publish a superlative index in 2002, it has not decided how this index will be constructed. The Commissioner said that true superlative indexes cannot be produced in "real time," or monthly, because they require current expenditure data, which are impossible to collect and process on a monthly basis. In July 1997, another BLS official told us that the superlative index that BLS plans to publish in 2002 may be (1) an annual number with a 2-year lag that, for example, would reflect inflation for the year 2000 or (2) a current measure that would be subject to revision as more current expenditure data become available. According to the BLS official, this second measure would not be considered a true superlative index until the more current expenditure information was incorporated.
Although BLS has not decided on the construction of a superlative index, one approach or index formula it is considering is referred to as a Fisher Ideal superlative index. As explained in appendix IV, two different index values are combined to produce a Fisher Ideal superlative index. One of the two index values is based on the Laspeyres index formula, which is the formula used to produce the official CPI. In other words, values from the CPI's fixed market basket would be inputs to the Fisher Ideal index. The other index value used in this superlative computation is based on a Paasche formula, which, unlike the Laspeyres formula, incorporates current expenditure weights. In general terms, the Fisher Ideal superlative index falls between the values produced by the Laspeyres and Paasche formulas.

According to BLS officials, BLS has the technical ability to update the expenditure weights more frequently. However, as of June 1997, BLS was undecided as to whether it would update the weights outside of major revisions to the CPI. The Deputy BLS Commissioner said BLS was still considering the matter and, as a first step, needed to make a decision within BLS about updating the expenditure weights at times other than major revisions. In August 1997, in commenting on a draft of this report, the BLS Commissioner said BLS was developing a new updating policy. To develop this policy and before making a final determination, the Commissioner said, BLS plans to study a number of practical questions and to seek the advice of its advisory councils, all with the intent of determining the best frequency for updating the CPI weights.

As the principal source of information on consumer prices and inflation in the United States, the CPI should reflect current consumer expenditures as much as practical. That clearly was the view of the Stigler committee in 1961 and of the Boskin commission in 1996. One step that BLS could take to advance that concept is to update the expenditure weights of the CPI more often than only during major revisions to the CPI. The current practice of updating weights only as part of a revision means that 10 years or more pass between updates of the expenditure weights; this appears too long to achieve a reasonable representation of current consumer spending. The BLS Commissioner said that she intuitively believed that 10 years between updates was too long. The two former BLS commissioners and eight CPI researchers with whom we spoke all believed that, conceptually, the weights should be updated more often than every 10 years. According to a BLS official and published information, other G-7 countries update expenditure weights more often than every 10 years.

However, BLS has held for some time that, although 10 years between updates may seem inappropriate, there is no strong empirical evidence of a connection between the age of the weights and the measurement of inflation. Our examination of BLS data, however, showed that the age of expenditure weights affects the measure of inflation. Although the data are not perfect and do not isolate the effects of using outdated expenditure weights, comparisons of price indexes employing BLS data with old and new weights indicate that price indexes computed with more current weights were always different from indexes computed with older weights. This result has been the case going back to comparisons made for the first revision in 1940.
In addition, comparisons generally tend to show lower rates of inflation with indexes using newer weights.

There are also other reasons for making certain that the expenditure weights are, as much as practical, reflective of current consumer spending. Since 1962, the CPI has been legislatively connected to adjusting some benefit payments for inflation and, more recently, to adjusting federal tax brackets. As a result, any overstatement or understatement of inflation by the CPI can have a major impact on the federal budget. For example, if, beginning in 2003, CPI growth were annually reduced by 0.1 percentage point and all policies and the economy remained unchanged, CBO estimates that the federal budget surplus over the 4 years following 2003 would be cumulatively $10.8 billion higher.

We recognize that gaining financial support for revising or improving the CPI has been a problem at times. We cannot predict the ease or difficulty BLS might have in getting funds to update the weights more often (e.g., the $3.1 million over 3 years that BLS estimated it would need for an update in 2003). We also recognize that adjusting the weights more often does not have the highest priority among all commentators on the CPI. For example, the Boskin commission would rather see BLS replace the fixed market basket CPI with various types of superlative indexes. However, even if BLS published a superlative index, which it plans to do, updating the weights more often would remain important because BLS does not view superlative indexes as replacements for the fixed market basket CPI. It plans to continue to publish the long-standing fixed market basket CPI. In addition, since BLS is still trying to address basic conceptual issues in designing the superlative-type index that it plans to publish in 2002, the uncertainty surrounding this planned index suggests to us that BLS should make the fixed market basket index as current and accurate as possible.

We recommend that, as long as a fixed market basket CPI is published, the Commissioner of BLS update the expenditure weights of the CPI's market basket of goods and services more frequently than every 10 years to make the index more timely in its representation of consumer expenditures.

We sent a draft of this report to the Secretary of Labor, the Chair of CEA, the Director of OMB, and the Chairman of the Board of Governors of the Federal Reserve System and requested comments from them or their designees. The Commissioner of BLS provided comments for the Department of Labor and said she supports more frequent updates of the expenditure weights. However, the Commissioner said neither economic theory nor empirical evidence demonstrates the superiority of any particular update interval. She said that BLS needs to consider carefully what frequency will yield the most accurate CPI and best support the many uses of the index. There are, she said, a number of practical questions related to developing a new updating policy that BLS must address. BLS is currently studying these questions but, she said, the ultimate decision rests largely on commonsense judgment. Finally, the Commissioner emphasized that BLS will not evaluate potential changes to calculating the CPI on whether they raise or lower the measured rate of price change. Rather, BLS will evaluate potential changes on whether they produce a more accurate index. We agree with the Commissioner's statement that potential changes to the CPI should be predicated on whether they produce a more accurate index.
The Commissioner did not say whether the new policy would direct an expenditure weight update between major revisions, which have occurred every 10 years or so. Although we cannot say exactly how often the expenditure weights should be updated, the evidence we reviewed suggested that updating only once every 10 years or so was insufficient. The Commissioner's August 8, 1997, letter is reprinted in appendix V. BLS provided technical comments on the draft report by separate communication, and we incorporated them as appropriate.

By a letter dated August 8, 1997 (see app. VI), CEA's Director of Macroeconomic Forecasting said more frequent updating would be a small improvement and ought to be considered. However, the Director hoped that readers of this report would not confuse more frequent updating with the adoption of a true cost-of-living index.

In an August 12, 1997, letter (see app. VII), OMB's Associate Director for Economic Policy said that frequent updating of expenditure weights is one important option. However, OMB believed that BLS should consider more frequent updating in context with other potential improvements. The Associate Director pointed out that substitution bias would remain in a Laspeyres-type index even with more frequent updating of expenditure weights. The CPI is a Laspeyres-type index.

The Federal Reserve's designee, the Assistant Director and Chief of the Economic Activity Section, Division of Research and Statistics, said in a July 31, 1997, letter (see app. VIII) that the draft report addressed a very important public policy issue. He said that more frequent updating of the expenditure weights would be desirable absent other actions to improve the CPI's accuracy. However, other changes to the CPI, such as those recommended by the Boskin commission, may do more to improve the CPI's accuracy. He said our recommendation for more frequent updating seemed to be only a "second-best" solution, which the Federal Reserve did not endorse. The "first-best" solution, he said, is for BLS to depart from the fixed-weight structure of the CPI.

"Economic theory provides an elegant rationale for the use of superlative index formulas . . . to provide approximations to a cost-of-living index. . . . The unfortunate limitation of superlative formulas is that their calculation requires current-period expenditure as well as price data, so superlative indexes can be published only with a lag. This precludes their use in the CPI . . . ."

As we previously discussed in this report and as the Commissioner mentioned in her comments, the administration has asked Congress for funds to produce a BLS superlative index beginning in 2002. BLS also plans to continue to publish the Laspeyres fixed market basket CPI. While we agree with those who commented that updating the expenditure weights is not a fix for turning the CPI into a true cost-of-living index, we believe that such updating makes sense for the fixed market basket CPI as long as BLS continues to publish it.

The Federal Reserve and CEA designees also expressed concern as to whether we overstated the effect of more frequent updating on the CPI. Both cited one estimate (a 0.04 percentage point reduction) from a February 1997 research paper written by a BLS official to support their concern. We believe we have not overstated the potential effect. We report that a 5-year update of the expenditure weights could reduce the CPI's rate of growth by between 0 (zero) and 0.2 percentage point per year. This range was estimated by BLS on the basis of historical evidence, which was provided in the February 1997 research paper. BLS has not expressed to us any second thoughts about the reasonableness of this range. For example, BLS did not question the range in commenting on our draft report.
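Because small annual differences compound, the upper end of this range is not trivial over a decade. The following back-of-the-envelope sketch assumes a purely illustrative baseline inflation rate of 3 percent per year; the baseline is our assumption, not a BLS or CBO figure.

```python
# Rough arithmetic on how a small annual reduction in measured CPI
# growth compounds; the 3 percent baseline is an assumption chosen
# only for illustration.

years = 10
baseline = 1.03                       # assumed CPI growth factor per year
for cut_pp in (0.0, 0.1, 0.2):        # BLS' range, in percentage points
    adjusted = baseline - cut_pp / 100.0
    gap = 1.0 - (adjusted / baseline) ** years
    print(f"{cut_pp:.1f} pp/year lower -> index level about "
          f"{100 * gap:.1f} percent lower after {years} years")
```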
As we report in appendix III, the BLS research paper provided a number of different point estimates that are based on regression analyses, overlap comparisons, and other studies. The regression analysis from which the 0.04 percentage point estimate was derived found evidence of a small effect—rather than no effect—on measured inflation. In addition, the 0.04 percentage point estimate was within the lower end of the range of estimates that BLS provided to us.

CEA, OMB, and the Federal Reserve each made additional comments, which are addressed as appropriate in appendixes VI, VII, and VIII.

As arranged with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its issue date. At that time, we will send copies of this report to the Chairman of this Committee; the Chairmen and Ranking Minority Members of other interested congressional committees; the Secretary of Labor and the Commissioner of BLS; the Director and the Chief Statistician of OMB; the Chair of the Council of Economic Advisers; and the Chairman of the Board of Governors of the Federal Reserve System. We will also make copies available to others on request. Major contributors to this report are listed in appendix IX. If you have any questions about this report, please call either of us. Bernard Ungar can be reached on (202) 512-8676, and James Bothwell can be reached on (202) 512-6209.

To address our first objective, obtaining views on updating the Consumer Price Index's (CPI) expenditure weights more often than every 10 years, we asked two former Bureau of Labor Statistics (BLS) officials and eight individuals who have studied the CPI how often they believed the CPI should be updated. Of these eight individuals, one was a member of the Stigler committee; four were members of the Boskin commission; one had developed a superlative index theory that BLS was considering in connection with the CPI; and two had studied the Boskin commission's report. These latter three researchers were recommended to us by the Boskin commission members we interviewed or by others because their views were neutral or differed from the Boskin commission's position on the amount of bias in the CPI. We also reviewed public statements made by the Chairman of the Board of Governors of the Federal Reserve System concerning the frequency of updating the CPI. To obtain information on the practices followed by other industrialized countries in updating their consumer price indexes, which also addresses our first objective, we obtained information from BLS and from publications of the Organization for Economic Cooperation and Development and the Canadian government on how often the G-7 countries update their CPIs.

To estimate the cost to BLS of updating the CPI on a 5-year cycle, our second objective, we asked BLS to provide us with certain actual and estimated cost data. We asked BLS to provide us with the costs associated with the last major revision of the CPI, which took place in 1987, and the projected costs for the 1998 revision.
In addition, we asked BLS for its estimate of what the costs would have been to update the CPI in 1992 and its estimate of what the cost might be to update the CPI in 2003. In other words, we asked BLS to provide cost data for a prior revision (1987), a planned revision (1998), and two updates (1992 and 2003). The interval between 1992 and 1998 is 6 years rather than 5 years, but that difference was unavoidable given that a major revision is scheduled for 1998. We did not specify to BLS what assumptions to make or what items to include or exclude in estimating costs for 1992, 1998, and 2003. We did not evaluate the reasonableness of BLS' assumptions or estimates.

BLS provided cost data for the 1987 revision, the upcoming 1998 revision, and a 2003 update. BLS suggested that the cost for a 1992 update could be derived by deflating the cost of the 2003 update. For this conversion, we compared the Object Class 11 index published by the Office of Management and Budget (OMB), which is used to adjust pay categories, and the Gross Domestic Product (GDP) price index as determined by the Department of Commerce's Bureau of Economic Analysis, which is used to adjust all other budget categories. We found minor differences between the two deflators and chose to use the GDP price index, which, in comparison with the Object Class 11 deflator, led to a slight overstatement of the cost and, in reference to our cost-benefit comparison, provided a conservative estimate.

To address our third objective—estimate the dollar effect on the federal budget if the CPI weights were updated on a 5-year cycle—we obtained assistance from BLS and the Congressional Budget Office (CBO). We asked BLS to estimate whether the CPI would go up or down as the result of a 5-year update. More specifically, we asked BLS to provide a range—an upper percentage point and a lower percentage point—of the possible change that could occur to the CPI with a 5-year update. To gauge the reasonableness of the estimates that BLS provided, we compared them with the results of BLS' overlap studies of old and new weights during the first 6 months of major revisions and other BLS studies that examined the impact of more frequent updates. The results of these comparisons are reported in appendix III.

We then asked CBO to estimate the effects that a 5-year update of the CPI in 2003 would have on federal outlays, revenues, debt service, and the overall budget, assuming no other changes in tax or spending levels and no other changes in the economy. To do this, we asked CBO to use BLS' lower and upper estimates and the midpoint of these estimates of the change in the CPI that would result from a 5-year update. To illustrate the effect of a 5-year update that would begin in 2003, we asked CBO first to apply its standard projections for the years 2004 through 2007; the results represented CBO's baseline. CBO then made additional projections for the years 2004 through 2007 to account for changes in the CPI as estimated by BLS for a 5-year update, and we compared these projections against CBO's baseline. To adjust the CPI for the effects of a 5-year update of the expenditure weights, CBO reduced its estimated CPI by 0.2 percentage point, which BLS had identified as the upper estimate of the change in the CPI from a 5-year update. The difference between the baseline and the adjusted CPI estimates was reported by us as the upper estimate of the dollar effect of a 5-year update on the federal budget.
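The structure of that upper-estimate calculation can be sketched briefly. The figures below (baseline CPI growth, the level of CPI-indexed outlays) are invented placeholders rather than CBO's actual projections, which cover many more budget categories and interactions.

```python
# Sketch of the baseline-versus-adjusted comparison described above.
# All dollar amounts and rates are hypothetical.

cpi_growth = 0.030        # assumed baseline CPI growth per year
reduction = 0.002         # 0.2 percentage point, BLS' upper estimate
indexed_outlays = 400.0   # hypothetical CPI-indexed outlays, $ billions

baseline_total = adjusted_total = 0.0
base_level = adj_level = indexed_outlays
for _ in range(4):        # the 4 years after a 2003 update (2004-2007)
    base_level *= 1 + cpi_growth
    adj_level *= 1 + cpi_growth - reduction
    baseline_total += base_level
    adjusted_total += adj_level

print(f"cumulative outlay difference: "
      f"${baseline_total - adjusted_total:.1f} billion")
```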
Similar estimates and calculations were made with the midpoint—0.1 percentage point—of BLS' estimates. No further estimates and calculations from the baseline were necessary to account for BLS' lower estimate of change, which was 0 (zero).

We also reported these estimates in footnote 23 of this report in 1997 constant dollars, which we calculated by applying the GDP price index. Dollar amounts for years other than the base year (1997) were adjusted for the effect of inflation with the GDP price index. These adjustments had the effect of increasing the amounts for the years before 1997 and of decreasing the amounts for the years after 1997.

CBO provides projections for current and future years but does not provide estimates for past years. Therefore, after discussion with CBO, we estimated how the federal budget deficit might have been affected if the expenditure weights had been updated in 1992. In making our estimates, we assumed that there were no other changes in tax or spending levels and no other changes in the economy for 1992 through 2007. We also assumed that the economic trend that was found for the years 1998 through 2007 could be reasonably applied to the years 1993 through 1998. We first replicated CBO's estimates for outlays and revenues that are affected by the CPI for 1998 through 2007 from projections for the relevant categories published in CBO's Economic and Budget Outlook in January 1997. We then asked CBO to follow the previously described methodology and to make estimates of a 0.1 percentage point reduction in the CPI beginning in 1997 and 2002. We also replicated CBO's published January 1997 estimates of the effect of a 1.0 percentage point change in the CPI on the federal deficit and adjusted those estimates to represent a 0.1 percentage point reduction. We then compared the effects of these two estimates on revenues and outlays and found significant differences in the revenue estimates that were due to rounding rules for tax revenues. We chose to use the adjusted published estimates rather than those that CBO calculated for us because the adjusted published estimates provided a more conservative estimate as well as a smoother trend. The effects from the adjusted 1.0 percentage point reduction were then applied to the outlay and revenue totals that are affected by changes in the CPI in each year from 1993 to 1998. Debt service costs were calculated from the year-to-year change in CBO's baseline debt, less the saving from changes in outlays and revenues. The current-dollar estimates derived from these calculations were adjusted with the GDP price index to 1997 constant dollars. The estimates for the 0.1 percentage point reduction were doubled to obtain estimates for a 0.2 percentage point reduction in the CPI. We met with CBO to discuss our approach, and CBO staff stated that the method and results appeared reasonable.
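The constant-dollar conversions mentioned above follow a simple rescaling by the GDP price index. The index levels in this sketch are invented placeholders, not actual GDP price index values.

```python
# Minimal sketch of the 1997 constant-dollar conversion: current-dollar
# amounts are rescaled by the ratio of the base-year (1997) GDP price
# index to each year's index. Index levels here are hypothetical.

gdp_price_index = {1993: 90.0, 1997: 100.0, 2003: 112.0}

def to_1997_dollars(amount, year):
    return amount * gdp_price_index[1997] / gdp_price_index[year]

print(round(to_1997_dollars(3.1, 2003), 2))  # post-1997 amounts decrease
print(round(to_1997_dollars(1.0, 1993), 2))  # pre-1997 amounts increase
```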
In connection with the effect on the federal budget, we asked the Chief Actuary of the Social Security Administration (SSA) to estimate the effect a change in the CPI would have on the average benefit paid to retired workers. Your office had requested that we obtain this information to illustrate how a federal revenue or payment program that is adjusted periodically because of changes in the CPI might be indirectly affected by more frequent updating of the expenditure weights. To make the estimate, we asked SSA to use the midpoint of BLS' range of the possible change that would occur to the CPI with a 5-year update. Using that midpoint and the President's Fiscal Year 1998 Budget assumptions, SSA estimated the change in the average monthly benefit, beginning with the December 2003 benefit, which would be payable in January 2004, and continuing through December 2007.

Our fourth objective had the following two elements: (1) identify the reasons for the 10 years or so between revisions and (2) assess those reasons. For the first element, we interviewed present and past officials of BLS and obtained their views on why major updates to the CPI have been spaced about 10 years apart. Among the officials we interviewed were the present Commissioner of BLS and a former Commissioner of BLS. We also reviewed the 1961 congressional testimony of another BLS Commissioner in which he addressed the subject of BLS' timetable for revising the CPI.

In our assessment of BLS' reasons, we (1) collected and analyzed information on past comparisons between indexes that applied old and new expenditure weights, which were also used to address the reasonableness of BLS' estimates under our third objective; (2) obtained information on how BLS collects its source data for the CPI; (3) obtained BLS fiscal year 1998 budget information to ascertain BLS' plans for future changes in its indexes; and (4) compared the estimated costs and benefits of a 5-year CPI update cycle obtained under our second and third objectives.

In our assessment of past comparisons between indexes that applied old and new weights, we noted that the indexes reflect differences in addition to those directly related to changes in expenditure weights, such as conceptual changes in the structure of the market basket. These differences are a result of data limitations in that the overlap periods incorporate many factors that can be changed in a revision. With this knowledge, we treated the differences as indicators of the effect of an update of the expenditure weights because an update of the weights is unlikely to occur in isolation from the other factors that are associated with revisions. For example, a 1992 update would have incorporated a market basket that would have been based on different geographic areas because changes were made in 1986 in the geographic locations where expenditure data were collected. Such geographic changes are associated with major revisions.

In addition to reviewing overlap studies, we examined the effect of the age of weights with indexes that were calculated with alternative base periods. For example, comparisons were made of the official CPI's 3-year base of 1982 through 1984 with alternative 3-year base periods (e.g., 1987 through 1989). In these and other comparisons, we applied a concept, based on economic literature, that an index is more accurate if the expenditure weights used to compute it represent, as much as practical, current consumer spending.

As previously reported, this report includes, and often relies on, estimates and comparisons prepared by BLS, CBO, or SSA. We did not verify the computerized data that the agencies used in producing these estimates and comparisons.
Verification, in our opinion, would have been impractical because it would have been costly and time consuming. In addition, the estimates and comparisons were within the scope of activities that BLS, CBO, and SSA normally perform. Therefore, we used their estimates and comparisons.

Our work was designed to examine the importance of updating the CPI more often than about once every 10 years. The results we obtained are intended to contribute to the discussion of how often the CPI should be updated but are not intended to represent all future effects of shortening the updating cycle. The point of shortening the updating cycle is to have the CPI reflect, as closely as practical, the current spending patterns of consumers, regardless of whether the index is pushed upward or downward. Our work is also not intended to evaluate a change in the basic formula that could address substitution bias in the CPI.

BLS produces the CPI by measuring the average change over time in the prices paid by urban consumers for a fixed market basket of consumer goods and services. The market basket is determined from detailed records of purchases made by thousands of individuals and families. The items selected for the market basket, such as potatoes, are to be priced each month at retail outlets, such as grocery stores, in urban areas throughout the country. According to BLS, in 1995, field representatives visited approximately 30,000 retail establishments and housing units each month, with prices collected for 94,000 items.

The CPI is used as a measure of price changes to make economic decisions in the private and public sectors. According to BLS, the CPI has the following three major uses:

Economic indicator of inflation. The administration, Congress, and the Federal Reserve use trends in the CPI as an aid to formulating fiscal and monetary policies. Business and labor leaders, as well as private citizens, use the CPI as a guide to making economic decisions.

Escalator for wages, benefit payments, and tax brackets. In 1996, the CPI was used by collective bargaining units to adjust the wages of 1.7 million workers. It is used to adjust some federal benefit payments for inflation. For example, in September 1996, as a result of changes in the CPI, 43.5 million Social Security beneficiaries; 6.6 million Supplemental Security Income recipients; 6.4 million railroad, military, and federal civilian retirees and survivors; and 25.8 million food stamp recipients had their benefits adjusted for inflation. The CPI is also used to adjust the federal individual income tax structure to prevent bracket creep (i.e., increases in real tax rates due solely to inflation). Some benefit payments, such as those for Social Security recipients, tax deductions for personal exemptions, and tax brackets are adjusted automatically by the CPI, rather than on the basis of discretionary policy decisions.

Deflator of selected economic statistical data series. The CPI is used to adjust selected economic statistical series for price changes and to translate these series into inflation-free dollars. Examples of data series that are adjusted by the CPI include retail sales, hourly and weekly earnings, and components of the National Income and Product Accounts.

The CPI was initiated during World War I, when rapid increases in the prices of goods and services, particularly in shipbuilding centers, made such an index essential for calculating cost-of-living adjustments in wages.
In 1921, BLS began regular publication of an index representing the expenditures of urban wage and clerical workers, which was then called the Cost-of-Living Index. The name of the index was changed to the CPI following controversy during World War II over the index's validity as a measure of the cost of living. According to BLS, the CPI has always been a measure of the changes in prices for goods and services purchased for family living.

Major revisions were made to the CPI about every 10 years to update the fixed market basket; the next major revision is scheduled to be released in January 1998. Because consumers' buying habits changed, new studies were made of what goods and services consumers were purchasing, and major revisions to the CPI were made in 1940, 1953, 1964, 1978, and 1987. In the 1978 major revision, several changes were made, including the publication of a new index for all urban consumers—the CPI-U. According to BLS, the CPI-U, which represents the expenditures of about 80 percent of the population, takes into account the buying patterns of professional employees, part-time workers, the self-employed, the unemployed, and retired people, as well as those previously covered in the CPI. BLS has continued publication of the older index, the CPI-W, which represents the expenditures of urban wage and clerical workers, or about 32 percent of the population.

Construction of the CPI begins with the selection of a collection of goods and services that is typically bought by the index's reference population. The collection of goods and services, called items, is known as the market basket. The CPI market basket is developed from detailed expenditure information that is provided by families and individuals who participate in the Consumer Expenditure Survey (CEX). Altogether, about 29,000 individuals and families provide expenditure information for use in determining the importance, or weight, of each item in the index structure. These data are also used to select the categories of items from which specific, unique commodity and service items are selected to be priced for the CPI. BLS measures price changes each month by checking the prices of the items in the market basket and then comparing the aggregate costs of the market basket with those for the previous month. BLS field representatives obtain prices for most of the items through personal visits to approximately 30,000 retail establishments and housing units.

BLS classified all CEX expenditure items into 206 item strata, which are arranged into 7 major components: (1) food and beverages; (2) housing; (3) apparel and upkeep; (4) transportation; (5) medical care; (6) entertainment; and (7) other goods and services, such as haircuts, college tuition, and bank fees. Taxes that are directly associated with the prices of specific goods and services, such as sales and excise taxes, are also included.

Expenditure weights give proportionate emphasis to the price changes of one item in relation to other items in the CPI. They allow the CPI to distinguish items that have a major impact on consumers and to give appropriate emphasis to the price changes associated with those items. The weight of an item in the CPI market basket is derived from consumers' expenditures as reported in the CEX. To compute the weight, BLS first totals the amount spent on an item stratum, such as white bread, by CEX respondents during the base weighting period.
BLS then divides that total by the number of CEX responding units, which results in an average expenditure per unit. Next, the average expenditures per unit are weighted with data from the decennial census to represent the U.S. urban population. To do so, the average expenditure amounts are multiplied by certain factors to represent the geographic dispersion of the urban population. Finally, these nationwide urban expenditures on the market basket items are totaled into an aggregate amount. The 206 expenditure weights are the percentages of this aggregate amount that are spent on each of the 206 item strata (e.g., white bread). On the basis of average expenditures during the reference period, expenditure weights remain fixed, or constant, until the next major revision of the CPI and serve as a benchmark from which price comparisons are calculated. The weights of the components for the last major revision in 1987 were derived from the 1982 through 1984 CEX (see fig. II.1).

Each month, BLS field representatives visit or call thousands of retail stores, service establishments, rental units, and doctors' offices all over the United States. Over the course of the month, they record the prices of about 94,000 items. To determine which retail outlets its representatives should visit to obtain its monthly price quotations, BLS sponsors the Point-of-Purchase Survey (POPS), which is conducted by the Bureau of the Census. The survey respondents are asked, by item category, such as doctors' services, whether they made specific purchases and, if so, the names and locations of all places of purchase and the expenditure amounts. BLS uses the results from the survey to select outlets for pricing. This survey is conducted in approximately 20 percent of a sample of urban areas each year; as a result, the entire nonshelter sample is updated every 5 years.

BLS field representatives visit each selected outlet to initially select the items that will be priced either monthly or bimonthly. For each outlet, categories of items are selected for pricing. Using probability selection methods that are based on revenue and volume information provided by the retail outlet, BLS field representatives use a table of random numbers to select for pricing a unique item within the specified categories. The monthly price changes for the same item (e.g., cigarettes) that are collected by BLS field representatives in urban areas throughout the United States are averaged, weighted, and published. Because the concepts BLS uses to measure medical care and shelter costs are different from those used for the items previously described, the pricing of these items is approached in a different manner.
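To pull together the weight derivation described earlier in this appendix, the following is a minimal computational sketch. The strata, dollar totals, unit counts, and census factors are all invented for illustration and bear no relation to actual CEX or census figures.

```python
# Sketch of the expenditure weight derivation: CEX stratum totals become
# per-unit averages, are scaled by census-based geographic factors, and
# end up as percentage shares of aggregate urban spending.

cex_totals = {"white bread": 2_600_000.0,   # dollars reported by CEX units
              "gasoline": 31_200_000.0,
              "rent": 78_000_000.0}
responding_units = 26_000
census_factor = {"white bread": 3_100.0,    # hypothetical scaling factors
                 "gasoline": 3_300.0,       # representing the geographic
                 "rent": 2_900.0}           # dispersion of urban consumers

urban_spending = {s: (total / responding_units) * census_factor[s]
                  for s, total in cex_totals.items()}
aggregate = sum(urban_spending.values())
weights = {s: round(100.0 * v / aggregate, 1)
           for s, v in urban_spending.items()}
print(weights)   # each stratum's percentage share of aggregate spending
```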
BLS reported that historical evidence suggests that a 5-year update of the market basket could reduce the rate of growth of the CPI by between 0 (zero) and 0.2 percentage point per year. Regarding the effect of a 1991 update to the CPI, BLS cited one specific source of evidence showing that the rate of growth would have been lower by 0.11 percentage point. In addition, BLS states that the effect of updating the expenditure weights in 1998 will likely be a reduction of 0.1 or 0.2 percentage point. As a result of this and other information, we chose 0.1 percentage point, the midpoint of BLS' range, for the purposes of our calculations.

Evidence on the possible impact of more frequent updates of expenditure weights on the rate of growth of the CPI includes overlap studies performed by BLS at the time of major revisions and other CPI index comparisons using specialized databases. BLS has been performing overlap studies for more than 50 years, and these studies consistently have shown a difference between indexes computed with old and new weights. In most of these cases, indexes computed with the old weights show a higher rate of growth than indexes computed with the new expenditure weights. Other evidence includes alternative CPI index series, which were computed using databases that allow comparisons between old and new weights. Indexes computed with new expenditure weights almost always produce different results from indexes with old weights. These same data also suggest that indexes relying on older expenditure weights typically show a higher rate of growth than indexes computed with newer expenditure weights. An upward bias in indexes computed with older expenditure weights is consistent with economic theory and other evidence. In our analysis, we applied the economic concept that an index is more accurate if the expenditure weights used to compute it represent, as much as practical, current consumer spending.

BLS began performing overlap studies for its work on the first revision of the CPI in 1940. In these overlap studies, BLS computed two indexes for the same period. One index was calculated with the original weights, and one was computed with more recent weights. As a result, these overlap indexes provide evidence on the effects of updating weights over a long period and under different economic conditions. In those cases where there were no important changes in other procedures or other anomalies, the difference between these two indexes can be attributed to the change in weights. However, BLS has often instituted new procedures and improvements as a part of the revisions, along with the updates to the expenditure weights. These improvements could include changes in geographic coverage, adoption of probability sampling methods, and other changes. Those changes and any unusual economic conditions at the time of the revision limit the applicability of the overlap findings to current questions regarding the likely effects of updating expenditure weights.

In all of the overlap comparisons, differences were found between indexes calculated with the new weights and indexes calculated with the old weights. In five of the seven comparisons, the indexes with the new weights recorded a lower rate of inflation than those with the old weights. In the two instances where the new weights resulted in higher rates of growth, changes in price collection methodology and the aftermath of the wartime economy might have had an effect on those results.

As a part of the first revision of the CPI in 1940, BLS conducted three comparisons that used expenditure weights that were derived from 1917 through 1919 Consumer Expenditure Survey (CEX) data (old weights) and weights from 1934 through 1936 CEX data (new weights). Comparisons were conducted for three different periods using the old and the new weights: 1925 through 1929, June 1930 through March 1935, and March 1935 through December 1939. Differences were reported for all three comparisons, and, in each case, the CPI with the old weights had a higher rate of growth than the index computed with new weights.
In the periods beginning in 1925 and 1935, the difference between the two indexes was small, but the difference between the overlapping indexes for the period beginning in 1930 was more than 1 percentage point (see table III.1).

In response to a presidential request, BLS also conducted an overlap study related to the 1953 revision. In this case, BLS applied the weights used in the 1940 CPI revision to January 1953 through June 1953 price data, the first months following the implementation of the 1953 revision. A comparison of the indexes computed with these two sets of weights showed that, unlike the previous 1940 revision studies, where the old weights produced a higher rate of growth than the new weights, the index with the old weights showed growth of 0.5 percentage point less than the index using the new weights (see table III.2). These results may be an anomaly related to the use of the 1947 through 1949 CEX data as the base period for the 1953 revision. Consumption in those years reflected the purchases of consumers following World War II, and the change may reflect unusual changes in consumer preferences or changes in the availability of various goods and services.

In the more recent series of overlap studies, BLS calculated the CPI with both the old and new weights for 6 months following a major revision. In the 1964 and the 1987 revisions, indexes computed with the old weights produced higher growth rates than indexes computed with the new weights. Overlap indexes computed for the 1978 revision showed the reverse (see table III.2). BLS suggested, however, that the lower rates of inflation produced by the older weights may have been due to a 1978 change in the methods used to select items for monthly pricing (see app. II). According to BLS, this change could have introduced an upward bias into the index calculated with the new weights, causing that index's higher rates of inflation. In addition, BLS estimated that, for the upcoming 1998 revision, measured inflation will likely be reduced by 0.1 or 0.2 percentage point.

Additional evidence on the effect of more frequent updates of expenditure weights is available from comparisons of index series computed with old and new weights. These calculations are performed on databases that have been constructed to allow comparisons of various index weights and methodologies. Although these databases are not available for the historical periods covered by the overlap indexes, they make it possible to compare a number of alternative CPI series that are based on various combinations of 1- and 3-year weights for 1982 through 1995. For example, one database has been used to compare the actual CPI, which is based on 1982 through 1984 weights, with indexes that were computed with 3-year weights from the following periods: 1987 through 1989, 1988 through 1990, 1989 through 1991, and 1990 through 1992. Other databases have been used to compare CPI rates of growth that are based on various combinations of 1-year weights.

Evidence from these databases suggests that changing the weights used in the computation of the CPI will usually change the rate of growth of the CPI. These rate-of-growth differences vary according to the database, the years used for the older and newer weights, and the specific time frame and methodology used for computing the index. As in the case of the overlap studies, most of the evidence suggests that older weights typically produce a higher rate of growth in the CPI than indexes computed with newer weights.
For example, a comparison of indexes computed with the 1982 through 1984 base period and indexes computed with weights based on the 1987 through 1989 period indicated that the older weights overstated inflation by approximately 0.1 percentage point per year over a 5-year period. BLS cited this and other evidence to support its statement that a 5-year update would reduce the measured rate of inflation by between 0 and 0.2 percentage point per year. As in the case of the overlap studies, BLS noted that the difference between the two indexes cannot be attributed only to changes in the weights. In those instances when there are important changes in procedures or anomalies in the data, the differences may not be an accurate reflection of the changes in the weights. BLS also noted that the differences may be affected by the overall rates of inflation, in that differences may be larger during periods when the overall inflation rates are higher.

BLS research that compared the 1982 through 1984 expenditure-based CPI (official) with alternative 3-year expenditure base periods indicated that the official CPI rose slightly faster, on average, than the alternative indexes with more current weights. For example, comparisons of the actual CPI with indexes computed with 3-year base periods starting with 1987 through 1989 and ending with 1990 through 1992 showed that the increases in the official CPI were often higher than those in the alternative indexes with more recent base periods, but there was no consistent finding across all of the 3-year combinations. In fact, price changes for 1994 for two of the more recent base-period indexes (1987 through 1989 and 1989 through 1991) were larger than those in the official CPI for that year.

Two separate databases were also created that allowed additional comparisons to be made between CPI rates of growth, using a number of alternative index calculation methods and 1-year base periods. The following expenditure base periods were available in those databases: (1) 1986 through 1995 and (2) 1982 through 1995. These two databases allow a variety of comparisons to be made of indexes using different base-period expenditure weights. BLS used a regression analysis to summarize the effect that the age of the weights has on price indexes. The analysis that was based on the first database provided no evidence that a price index calculated with more current weights will produce a lower rate of inflation than an index calculated with older weights. However, BLS' regression analysis with the second database found evidence that more current base weights yield smaller estimates of price change. In other words, indexes based on older expenditure weights tended to show a higher rate of inflation than indexes with more recent weights.

These data were also used to address more directly the question of the impact of a 5-year update on the rate of measured inflation. For this purpose, BLS compared inflation estimates obtained with the 1982 through 1984 weights with the inflation rates that would have been produced by 1987 through 1989 weights (see table III.3). This comparison indicated that, on average, inflation was lower by 0.11 percentage point with the 1987 through 1989 weights than with the 1982 through 1984 weights (i.e., a 5-year update would have reduced inflation by an average of about 0.11 percentage point per year). BLS noted, however, that the difference varied widely from year to year.

A consumer price index may be computed with one of several index formulas.
The purpose of this appendix is to illustrate several of those formulas. BLS constructs the CPI with a modified Laspeyres index formula. According to the economist George Stigler, a Laspeyres index formula produces an "upper" bound of the cost of living, whereas the Paasche index formula produces a "lower" bound of the cost of living. A superlative index, such as the Fisher Ideal index formula, is regarded by economists as providing a good approximation to a cost-of-living index.

To illustrate the three basic index formulas, we use information from the hypothetical weekly grocery bills of a single woman who eats breakfast and five dinners at home; the rest of her meals are eaten away from home (see table IV.1). In the first and second weeks, she purchased identical items, with price increases occurring in the second week for some items. In the third week, the prices of some items increased, and she altered her market basket by purchasing a different fruit. In all three weeks, we assumed that she attained the same level of satisfaction from the consumption of these food items. (A note to table IV.1 flags each price change or change in the item purchased from the previous week.)

The values for the three price index formulas—Laspeyres, Paasche, and Fisher Ideal—that would be derived for our hypothetical illustration are provided in table IV.2. The following descriptions are simplified to show how the indexes differ conceptually. Although the basic concepts presented are accurate, the actual calculations would be substantially more complex.

An index calculated with a Laspeyres index formula measures price changes in relation to the base period's market basket and thereby "fixes" the market basket by holding the items in it constant. It calculates what that market basket would cost in later periods, even if some of the items were no longer purchased. For our hypothetical shopper, the Laspeyres index formula uses what she purchased in the first week as the base of the calculation—the fixed market basket. All comparisons are made with respect to the quantities of items she purchased and the prices she paid for them in the first week. Since she purchased the same items in the second week, a Laspeyres index would divide her grocery bill for the second week by the first week's bill and obtain an index value of 101.0, as shown in table IV.2. Since the shopper bought blueberries instead of bananas in the third week, the third week's grocery bill cannot be simply divided by the first week's bill. An adjustment must be made to reconstruct the first week's fixed market basket by subtracting the cost of the blueberries and adding the cost of the bananas as if she had purchased bananas in the third week. This is done to make the third week's market basket identical to the first week's market basket. A Laspeyres index value of 103.2 is obtained by dividing the adjusted third week's grocery bill by the first week's bill.

An index calculated with a Paasche index formula measures price changes for a market basket containing what consumers are currently purchasing, rather than what they purchased in a previous period. This index assumes that consumers' tastes and preferences change to maintain a constant level of satisfaction and compares the cost of the consumers' current market basket with what it would have cost to buy this basket of goods and services in an earlier period.
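In standard textbook notation (ours, not BLS'), where p denotes price, q denotes quantity, i indexes the items in the basket, period 0 is the base period (week 1 in the illustration), and t is a later period, the Laspeyres (L), Paasche (P), and Fisher Ideal (F) formulas discussed in this appendix can be written as follows:

```latex
% Standard forms of the three index formulas discussed in this appendix.
% Period 0 is the base period (week 1 in the illustration).
\[
L_t = 100 \times \frac{\sum_i p_{i,t}\, q_{i,0}}{\sum_i p_{i,0}\, q_{i,0}},
\qquad
P_t = 100 \times \frac{\sum_i p_{i,t}\, q_{i,t}}{\sum_i p_{i,0}\, q_{i,t}},
\qquad
F_t = \sqrt{L_t \times P_t}.
\]
```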
As shown in table IV.2 for the hypothetical illustration, the Paasche index value (101.0) is the same as the value obtained with the Laspeyres index formula (101.0) in the first 2 weeks because our hypothetical shopper purchased the same items. However, the difference between the Paasche and the Laspeyres formulas is evident in the third week, when the shopper purchased blueberries in place of bananas. Instead of pricing bananas as was done in the Laspeyres calculation, blueberries remain in the market basket for the Paasche calculation and are priced as if they had been purchased in the first week. A Paasche index value of 100.9 is obtained by dividing the third week's grocery bill by an amount that reflects the third week's market basket priced with prices charged during the first week. Because the items, quantities, and prices changed between the second and third weeks, the Paasche index value for the third week cannot be compared with the Paasche value for the second week. The difference between these 2 weeks cannot be referred to as a price change because the shopper changed the type and quantity of fruit she purchased. The index numbers for the third week, calculated with a Paasche index formula, can be compared only with the base period—the first week. (Also, the index number for the second week can be compared only with the base period.) The Fisher Ideal index formula uses the Laspeyres and Paasche index values and, therefore, does not allow comparisons between adjacent periods. To allow comparisons of index values between adjacent periods, an adjustment—chaining—can be made to the Fisher Ideal index values. In this section, we first describe a Fisher Ideal index with the illustration of our shopper, and we then describe chaining with the Fisher Ideal index. Both the Fisher Ideal index and its chained version are superlative price indexes. A Fisher Ideal index number is the square root of the product of the Laspeyres index number multiplied by the Paasche index number. For example, in the third week of our illustration, the Fisher Ideal index number of 102.0 is the square root of the product of 103.2 (Laspeyres) and 100.9 (Paasche). The result of the Fisher Ideal index is a geometric mean, which differs from an arithmetic mean, or average. For example, the third week's arithmetic mean of 102.1 is 103.2 (Laspeyres) plus 100.9 (Paasche) divided by 2. Because the Fisher Ideal index incorporates the Paasche index value in its calculation, the limitations of the Paasche also transfer to the Fisher. For example, the comparison of the values for the third week with the values of the second week cannot be interpreted as a price change because the shopper purchased blueberries instead of bananas. As with the Paasche, the index numbers for the second and third weeks can only be compared with the base period—the first week—and not to each other. The chained Fisher Ideal index is the square root of the product of the chained Laspeyres index number multiplied by the chained Paasche index number. A chained index "chains" period-to-period indexes back to the reference period (i.e., week 1 in the hypothetical illustration). Because they are chained to each other, comparisons can be made between any sets of index number values. The Laspeyres and Paasche chained indexes are calculated similarly: the previous chained index value is multiplied by a price relative, the ratio of the current to the previous unchained index value.
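The Fisher Ideal and chaining steps can also be expressed compactly in code. The Laspeyres and Paasche values below are the table IV.2 figures quoted in the text; the chaining helper is an illustrative reading of the description above, not BLS's production method. The numeric walk-through continues in the next paragraph.

```python
import math

# Laspeyres and Paasche values from the report's illustration (table IV.2).
laspeyres = [100.0, 101.0, 103.2]   # weeks 1-3
paasche = [100.0, 101.0, 100.9]

# Fisher Ideal: square root of the product (a geometric mean) of the
# Laspeyres and Paasche values for each week.
fisher = [math.sqrt(l * p) for l, p in zip(laspeyres, paasche)]
print([round(v, 1) for v in fisher])           # [100.0, 101.0, 102.0]

def chained(series):
    """Chain an index: each value is the previous chained value times the
    ratio of the current to the previous unchained value."""
    out = [series[0]]
    for prev, curr in zip(series, series[1:]):
        out.append(out[-1] * curr / prev)
    return out

chained_fisher = [math.sqrt(l * p)
                  for l, p in zip(chained(laspeyres), chained(paasche))]
print([round(v, 1) for v in chained_fisher])   # [100.0, 101.0, 102.0]
```

With a single base period, the chained series here simply reproduces the unchained values, as in the report's example; the chaining step matters because it makes adjacent-period comparisons legitimate.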
For example, to chain the Laspeyres index numbers between the first and second weeks in our hypothetical illustration, the chained Laspeyres index number for the first week (100.0), which is also the base, is multiplied by the price relative of 1.01, which is the ratio between the Laspeyres index numbers for the second and first weeks (101.0 divided by 100.0). To calculate the chained value for the third week, the chained value for the second week (101.0) is multiplied by the price relative for the third week (1.02). These same procedures are followed to obtain the chained Paasche index values. Then, to obtain the chained Fisher Ideal index values, the square root is taken of the product of the chained Laspeyres index number multiplied by the chained Paasche index number. For example, the third week's chained Fisher Ideal index number of 102.0 is the square root of the product of 103.2 (chained Laspeyres) and 100.9 (chained Paasche). The index value derived from the Laspeyres index formula attained the highest value of the three index values by the third week, supporting economists' views that it provides an upper bound for estimating the cost of living. As shown in table IV.2, the Paasche index value was the lowest in the third week in comparison with the first week. Assuming that the shopper was equally satisfied with either fruit selection, the value derived from the Paasche suggests that it provides a lower bound for estimating the cost of living. The superlative index values, as represented in our hypothetical illustration by the Fisher Ideal index, indicate that by using the geometric mean of the corresponding Laspeyres and Paasche indexes, one obtains an index value that lies between them. The following are GAO's comments on the Council of Economic Advisers' (CEA) letter dated August 8, 1997. 1. To prevent any misunderstanding, we want to clarify that we are recommending that the expenditure weights be updated more frequently than every 10 years or so. We are not recommending a specific time interval (i.e., 5 years) as CEA suggested. 2. We agree that more frequent updating does not fix the bias arising from the fixed market basket structure of the CPI. However, the broad array of economic literature that we reviewed supported the proposition that the CPI would be improved by more frequent updating of expenditure weights. But economic theory is not available to guide the choice of a specific update interval for the CPI. We have added a statement explaining that we did not identify any theoretical guidance on how often expenditure weights should be updated. 3. In our draft report, we did not acknowledge that we disagree with the conclusion of the Greenlees paper because, for the purposes of this report, BLS indicated that there was a relationship between market basket age and measured inflation. BLS estimated that the growth in the CPI could be reduced annually anywhere from 0 to 0.2 percentage point if the expenditure weights were updated on a 5-year basis. In reaching this position, BLS took into account the information and conclusions drawn in the Greenlees paper. The following are GAO's comments on the Office of Management and Budget's (OMB) letter dated August 12, 1997. 1. As suggested by OMB, we obtained the views of Bureau of Economic Analysis (BEA) officials on the potential consequences for BEA's work if the CPI expenditure weights were updated more often.
The BEA Director stated that the content of Personal Consumption Expenditures (PCE) and the sources of information used to construct it make the potential effect unclear. BEA primarily uses CPI data to adjust the dollar amounts of many items in the PCE. The Director said that the goods and services in the PCE do not all correspond to those in the CPI on a one-to-one basis and that, in adjusting items in the PCE, BEA uses many different price indexes in addition to the CPI. The BEA Director said he was more concerned with the potential effect of an experimental CPI that BLS began publishing in April 1997. To address lower-level substitution bias, BLS is using a different formula on an experimental basis to aggregate the 94,000 items for which prices are collected each month into the 206 item strata. (See footnote 14 in the background section of this report.) This experimental index does not involve the subject of this report—weighting the 206 item strata for aggregation. The Director said that if and when BLS introduces this change on a permanent basis, changing the aggregation of items at the lower level is more likely to affect BEA's work than changing the frequency of the CPI's expenditure weight updates. 2. We agree with the clarifications that OMB suggested and have made them throughout the report. The changes make clear that the 0.1 percentage point and the 0.2 percentage point are each an annual reduction in the growth of the CPI, and that the estimates of the effect of such reductions on the federal budget are cumulative totals. 3. The dollar estimates that CBO provided to us and that were included in our draft report factored in the rounding rules for income tax brackets. As suggested by OMB, we discussed with CBO whether it should provide additional estimates without these rounding rules. In essence, the CBO staff with whom we spoke said that setting aside the rounding rules would not produce estimates that differed much from the estimates in the draft. According to the staff, the rounding rules would affect which year departed from the trend over the number of years studied, but the cumulative dollars over those years would not change. For that reason, we did not ask CBO to produce additional estimates; therefore, the estimates that appeared in the draft report also appear in this report. The following are GAO's comments on the Federal Reserve's letter dated July 31, 1997. 1. We agree with the Federal Reserve's comment that the recent paper by Shapiro and Wilcox supported the conclusion in the Greenlees paper that more frequent updating of expenditure weights will not reduce upper-level substitution bias. However, a data limitation identified in the Greenlees paper probably applies to the Shapiro and Wilcox paper as well. This limitation raised questions in our minds about the conclusions that Greenlees and that Shapiro and Wilcox drew from comparisons derived from these data. Each paper compared a price index that was based on 1982 through 1984 expenditure data with a price index that was based on 1986 and later expenditure data. We believe that what Greenlees identified as a potential explanation for the contrasts between the two price indexes—the incorporation of the 1980 census-based geographic samples into the CEX in 1986—could also have affected the Shapiro and Wilcox results, a factor their paper did not consider.
We should also point out that BLS, for purposes of this report, estimated that the growth in the CPI could be reduced from 0 to 0.2 percentage point per year with a 5-year update of expenditure weights. In addition, Shapiro and Wilcox indicated that the trends they found in their research study were not the trends they expected. Referring to these as "elusive empirical puzzles," they called for additional research in this area of bias in the CPI. 2. We talked with BEA's Director and its Chief Statistician about the Federal Reserve's comment concerning the revision history of its chained price index. We specifically asked about BEA's use of the "Laspeyres tail" in its chained price index and whether BEA had any estimates, based on its experience with this methodology, that would indicate how much routine updating of expenditure weights might matter. (Since the index with the Laspeyres tail used fixed weights and the revised chained price index used a Fisher Ideal formula, which is more reflective of current consumer spending, the amount of difference between the two price indexes could indicate the level of effect that would occur with a more frequent updating of expenditure weights.) The Chief Statistician said that he had looked at the difference between the two price indexes and that using the Laspeyres tail appeared to have a very small effect, but he had not calculated the size of its effect apart from other factors that were also changed when the index was revised. He believed these other factors probably had made a greater contribution to the difference between the two indexes than changing the price index formula.
Pursuant to a congressional request, GAO reviewed the Consumer Price Index (CPI) market basket expenditure weights, focusing on: (1) the views of individuals who were knowledgeable of the CPI on updating the weights between major revisions to the CPI and the practices followed by other industrialized countries in updating their consumer price indexes; (2) the additional cost to the Bureau of Labor Statistics (BLS) to update the weights on a 5-year cycle; (3) the dollar effect on the federal budget if the weights were updated on a 5-year cycle; and (4) BLS' reasons as to why updates of the weights have only occurred during major revisions to the CPI, which have been about every 10 years. GAO noted that: (1) the weight of professional opinion supported updating the market basket's expenditure weights more frequently than major revisions to the CPI have been made; (2) GAO spoke with 10 individuals who were knowledgeable about the CPI, and they were unanimous in believing that 10 years between updates was too long to reflect "current" consumer spending; (3) there was less agreement among the 10 individuals, however, on exactly how often updates should occur; (4) other major industrial countries update their consumer price indexes more often than the United States, according to information provided by BLS and contained in international publications; (5) however, BLS officials noted that some of these countries based their updates on national data that are not comparable to data used by the United States; (6) the cost of updating the expenditure weights is significantly less than the cost of a major revision; (7) BLS estimated that the cost to update the weights in 2003 would be about $3.1 million; (8) in comparison, BLS estimates that it will spend about $66 million on the upcoming 1998 revision; (9) because federal tax brackets and federal payments are adjusted for inflation, a CPI that more accurately measures inflation could affect the federal budget; (10) the Congressional Budget Office estimated that, assuming no other changes in policy or economic assumptions, if updating the weights in 2003 (5 years after the planned 1998 revision) reduced the CPI growth by 0.1 percentage point annually, the projected budget surplus would be increased by a cumulative total of $10.8 billion over the 4-year period of 2004 through 2007; (11) BLS cited several reasons for not updating the expenditure weights between major CPI revisions; (12) the foremost reasons, according to BLS, were a lack of empirical evidence to support more frequent updates and a void of theoretical guidance on how often to do them; (13) although theoretical guidance is not available on all facets of updating expenditure weights, such as exactly how often updates should occur, the preponderance of the data GAO reviewed supports the need for updating expenditure weights more frequently than about every 10 years; and (14) BLS' concerns about updating the expenditure weights between major revisions were indicated in June 1997, when BLS officials said that BLS has the technical ability to update the expenditure weights, but it must work through the challenging issues that now surround the CPI program.
To participate in Medicare, hospitals must maintain standards of patient safety and health that comply with Medicare COPs. For example, the COP related to nursing services includes such requirements for hospitals as providing a 24-hour nursing service that is supervised or furnished by a registered nurse. There are currently 23 Medicare COPs. (See app. II for a description of the 23 Medicare COPs.) CMS proposed revisions to all of the COPs in 1997, but it did not finalize them. Since then, CMS has revised several of the COPs, including those concerning the life safety code; quality assessment and performance improvement; organ, tissue, and eye donations; and nurse anesthetist supervision. Health care accreditation programs other than JCAHO's hospital accreditation program may generally adopt their own requirements if CMS determines that an accreditation program's requirements are at least equivalent to Medicare COPs. If CMS also determines, among other things, that the accreditation program's survey process is likely to identify any serious deficiencies in COPs, it must generally grant "deeming authority" to the accreditation program and treat entities accredited by these organizations as meeting Medicare COPs. CMS has the authority to review these programs, and it can impose a probationary period while monitoring performance and remove deeming authority if warranted. Most hospitals demonstrate compliance with standards equivalent to Medicare COPs through accreditation by JCAHO. In 2002, JCAHO accredited 4,211, or 82 percent, of Medicare-participating hospitals. Hospitals accredited by JCAHO received payments for Medicare-covered inpatient services of approximately $98 billion, or 90 percent of the $109 billion that was spent on hospital care in 2002. JCAHO, as part of its accreditation-related activities, also develops survey procedures, trains its surveyors, and formulates performance measures. JCAHO is governed by a 29-member board of commissioners and has a staff of over 1,000. JCAHO's deeming authority for hospitals is established in statute and therefore can only be changed by Congress. As a result of this unique statutory authority, hospitals accredited by JCAHO, because they meet JCAHO standards, are deemed to meet Medicare COPs as well. In contrast, the American Osteopathic Association (AOA)—a private, not-for-profit professional organization that offers accreditation services for hospitals and other health care organizations—holds deeming authority that is subject to CMS's direct review and approval. While hospital accreditation is its largest program, JCAHO also has accreditation authority under Medicare for certain other health care providers, including clinical laboratories, hospices, ambulatory surgical centers, and home health care agencies. All of these other JCAHO accreditation programs are subject to CMS's direct review and approval. To be accredited by JCAHO, a hospital must meet eligibility requirements, satisfactorily complete a triennial on-site survey process, and continue to maintain JCAHO's standards between surveys. The accreditation surveys that JCAHO conducts every 3 years are particularly important. For most hospitals, the triennial survey is the only time that JCAHO conducts an on-site review of the hospital's compliance with all quality standards and issues decisions on how well the hospital has complied with JCAHO's standards.
In 2004, JCAHO implemented a new hospital accreditation survey process, which, according to JCAHO, is intended to reduce the cost of accreditation to health care organizations and JCAHO, enhance public confidence that health care organizations are in continuous compliance with standards, increase the real and perceived value of accredited organizations, meet the requirements of deeming authorities and purchasers, and improve satisfaction for hospitals participating in the accreditation program. CMS exercises oversight of JCAHO's hospital accreditation program primarily through its validation surveys and annual reports to Congress. Under federal law, CMS must continually study the operation and administration of Medicare, including validating the JCAHO hospital accreditation process, and submit annual reports to Congress. CMS has agreements with state agencies to conduct validation surveys. There are different kinds of validation surveys, including traditional validation surveys, which are conducted on a sample of hospitals within 60 days of their triennial JCAHO survey. Traditional validation surveys provide the basis for assessing the effectiveness of JCAHO's hospital accreditation process in detecting deficiencies in Medicare COPs, which JCAHO-accredited hospitals are treated as meeting. Validation surveys also include 18-month surveys, which monitor how well JCAHO-accredited hospitals are complying with Medicare COPs midway between their 3-year JCAHO surveys, and allegation surveys, which are triggered by complaints or other reports of situations that pose potential threats to patient health and safety in JCAHO-accredited hospitals. CMS has the authority to remove the deemed status of a JCAHO-accredited hospital when a state agency's validation survey results in a finding that the hospital is out of compliance with one or more Medicare COPs. CMS uses a rate of disparity measure to summarize the extent to which an accreditation program, such as JCAHO's hospital accreditation program, has not found serious deficiencies identified by CMS through state agency validation surveys. For a hospital accreditation program, using the results from validation surveys, the rate of disparity is calculated as the difference between the number of hospitals found with serious deficiencies by state agencies and the number of hospitals found with comparable deficiencies by the accreditation program, divided by the number of hospitals sampled. CMS regulations provide that if the validation survey results for an accreditation organization with deeming authority indicate a rate of disparity of 20 percent or greater, CMS will notify the organization that its deeming authority may be in jeopardy and that the agency is initiating a deeming authority review. With respect to JCAHO, CMS includes the rate of disparity in its annual reports to Congress, in which it reports the results of its validation program for JCAHO's hospital accreditation program. JCAHO's pre-2004 hospital accreditation process often did not identify either hospitals with serious deficiencies or the individual serious deficiencies found by state survey agencies through CMS's validation program.
In a sample of 500 JCAHO-accredited hospitals, state agency validation surveys conducted in fiscal years 2000 through 2002 identified 31 percent (157 hospitals) with serious deficiencies; of these, JCAHO did not identify 78 percent (123 hospitals) as having serious deficiencies. For the same validation survey sample, the majority of the serious deficiencies that state survey agencies identified but JCAHO did not were in the physical environment COP category, which covers fire safety and prevention. From fiscal years 2000 through 2002, JCAHO did not identify 123 of the 157 hospitals (78 percent) with serious deficiencies that CMS's validation program identified out of a sample of 500 JCAHO-accredited hospitals. Table 1 shows the hospitals with serious deficiencies that state survey agencies identified and JCAHO did not during fiscal years 2000 through 2002. In 343 of the 500 hospital validation surveys, state agency surveyors did not find serious deficiencies. Both state agency surveyors and JCAHO surveyors identified 34 hospitals as having a serious deficiency. According to JCAHO, the disparity between state agency and JCAHO findings in the 123 hospitals may be attributed in part to the timing of the two surveys, JCAHO's phasing in of new requirements, different interpretations of the COPs by state surveyors, and inherent surveyor bias. However, in its comparison to determine disparity between the two surveys, CMS does consider whether it is reasonable to conclude that the deficiencies found by state survey agencies existed at the time JCAHO surveyed the hospital. From fiscal years 2000 through 2002, JCAHO did not detect 167 of the 241 serious deficiencies (69 percent) identified through CMS's validation program from a sample of 500 JCAHO-accredited hospitals. The 241 serious deficiencies found by CMS's validation program represent 2 percent of the 11,000 Medicare COPs surveyed by state agencies in the sample and were found in 157 hospitals. However, one serious deficiency in any one of these hospitals could limit its ability to provide adequate care to its patients. For example, a serious deficiency in the nursing services COP at a hospital in Texas, found by a state agency but missed by JCAHO in 2000, included such problems as failure to prepare and administer drugs in accordance with federal and state laws, inadequate supervision and evaluation of the clinical activities of nonemployee nursing personnel, and nursing care and procedures provided to patients that were not within the scope of accepted standards of practice. Among hospitals with serious deficiencies identified by CMS's validation program but not by JCAHO, there were on average 1.1 serious deficiencies per hospital, with a range from 1 to 6. Table 2 shows the percentage of serious deficiencies identified by CMS's validation program but not by JCAHO for fiscal years 2000 through 2002. Of the 167 serious deficiencies identified by CMS's validation program from fiscal years 2000 through 2002 but not detected by JCAHO, 87 were related to a hospital's physical environment, which includes life safety code standards on fire prevention and safety. For these 3 years, JCAHO did not detect 81 percent of the serious physical environment deficiencies identified by state agency surveyors. Table 3 shows the number of serious deficiencies, by category, identified by state survey agencies in CMS's validation program but missed by JCAHO surveyors.
The larger number of deficiencies in physical environment may be related to the fact that state agencies generally survey a hospital's compliance with the life safety code portion of the physical environment COP separately. JCAHO surveys assess compliance with the life safety code using a combination of the hospital's self-assessment, a hospital building tour, and observations made by all surveyors during the survey process. Examples of deficiencies in physical environment that JCAHO did not identify but CMS's validation program found in a hospital in Alabama in 2000 included the following: several exterior exits lacked emergency exit lighting; several exterior exits were illuminated only by single light bulbs; the fire alarm system and fire extinguishers had not been inspected annually as required; and an automatic sprinkler system had not been inspected annually and maintained by certified personnel as required. Serious deficiencies in the COP on physical environment compromise patient safety and health. The total number of deficiencies not identified by JCAHO in the quality-of-care COP categories—those COPs that involve the oversight and delivery of patient care—is similar to the number not identified by JCAHO in the physical environment COP. While the number of serious deficiencies not found by JCAHO in individual quality-of-care COP categories is smaller than the number not found in physical environment, when these quality-of-care COPs are combined, the proportion of serious deficiencies JCAHO missed is almost 60 percent of the total number of serious deficiencies identified by state survey agencies. The following are examples of hospitals found to be out of compliance with multiple quality-of-care COPs: In 2000, CMS removed the deemed status as a Medicare provider of a JCAHO-accredited hospital in California for failure to comply with two COPs, one of which was infection control. The hospital failed to provide a sanitary environment to avoid sources and transmission of infections and communicable diseases and failed to develop a system for ensuring the sterilization of medical instruments. Also in 2000, CMS notified a hospital in Texas that if it did not implement a plan of correction, the hospital's participation in the Medicare program would be terminated. Serious deficiencies at this hospital included lack of compliance with the pharmaceutical services and nursing services COPs because medications were administered without physician orders and a double dose of narcotics was given in the emergency room, with no explanation for the excessive dosage, to a patient who later died. State surveyors in CMS's validation program also may miss serious deficiencies. In related work on skilled nursing facilities and home health agencies, we found that the number of serious deficiencies found by state agencies was highly variable among states and may be understated. State agencies' detection of serious deficiencies in hospitals also varied widely among states for the 3 years we reviewed. For example, state survey agencies in California, Illinois, and Ohio found serious deficiencies in over 45 percent of the surveys they conducted from fiscal years 2000 through 2002. In contrast, Florida and New York found serious deficiencies in less than 10 percent of the surveys they conducted, and Louisiana did not find serious deficiencies in any of the surveys it conducted.
The potential of JCAHO's new hospital accreditation process to improve the identification of serious deficiencies is unknown because it is too soon after its January 2004 implementation for a meaningful evaluation; in addition, JCAHO's testing of the new process was limited. CMS has not had the opportunity to complete its validation program for 2004 to determine whether JCAHO surveyors using the new process are missing serious deficiencies later identified by state agency validation surveys. While unannounced surveys, which are planned for implementation in 2006, have the potential to improve the detection of serious deficiencies, other features of the new process that JCAHO did not test before implementation may have limitations that could affect the potential of the new process to identify problems with patient care. JCAHO's pilot test of the new process had limitations, including using a sample of hospitals that volunteered for the pilot instead of a random sample and self-evaluating the results instead of using an independent entity. Because JCAHO's new accreditation process was implemented in January 2004, it is too soon to know whether the new process is better at detecting serious deficiencies in Medicare COPs than the pre-2004 accreditation process. A JCAHO official told us the new process will aid in the detection of deficiencies, but we found that some of the features may have shortcomings that could limit their effectiveness. New features of the accreditation process include the hospital's self-assessment of compliance with accreditation standards midway through the accreditation cycle, surveyor review of the care provided to specific patients to determine the adequacy of the hospital's health care delivery system, and performance of all accreditation surveys on an unannounced basis beginning in 2006. (See app. III for a description of selected new features of JCAHO's new hospital accreditation process.) Periodic performance reviews assess hospital compliance with applicable standards and are performed at the 18-month midpoint between 3-year on-site accreditation surveys. According to JCAHO, the periodic performance review will have several benefits. These include providing hospitals with a process to assess their ongoing compliance and requiring them to correct or plan to correct all deficiencies identified. Periodic performance reviews must be conducted either by the hospital as a self-assessment or, if the hospital chooses, by JCAHO through an on-site review. However, periodic performance reviews may not necessarily improve the detection of deficiencies. JCAHO did not pilot test these reviews for the potential to detect deficiencies and did not test whether hospitals that conducted reviews do a better job of continuing to comply with standards. In addition, for hospitals performing self-assessments, JCAHO will not check these self-assessments to determine whether hospitals fully and accurately identified quality problems and developed adequate corrective action plans to address the problems identified. According to JCAHO, the priority focus process and patient tracer methodology together have the potential to enhance the ability of surveys to detect deficiencies by directing the attention of surveyors to key patient care areas. The priority focus process uses a data-based formula to identify a limited number of areas in each hospital that are particularly important to patient health and safety.
Priority focus areas might include infection control, medication management, or patient safety. Surveyors use the priority focus process combined with the patient tracer methodology to focus their surveys to specific areas for review. The patient tracer methodology guides their choice of current patients to “trace” through the experience of care within an organization. For example, if the hospital’s priority focus process data suggest that a patient with an orthopedic-related diagnosis such as a hip fracture should be traced, the JCAHO surveyor would review the patient’s medical record, noting where the patient had entered into the hospital and any services and transfers that occurred. Then the surveyor would retrace the steps in the patient’s care process by observing and talking to staff in some of the areas in which the patient received care. If the patient entered through the emergency department, was transferred to a medical/surgical unit, and then went to the operating room, the surveyor would go to these areas to interview staff about the care given to this specific patient. With information from patient tracers, the surveyor will assess whether any compliance issues exist with JCAHO standards. If the surveyor identifies a compliance issue while tracing one patient, the surveyor may review the records of similar patients to determine whether the problem is isolated or represents a pattern of care. However, JCAHO did not test the extent to which the priority focus process and the patient tracer methodology could help surveyors detect deficiencies. A JCAHO official told us these new features of the accreditation process were intended to help surveyors trace patients in a consistent way and not necessarily to improve the detection of deficiencies. JCAHO plans to conduct all hospital accreditation surveys on an unannounced basis beginning in 2006. JCAHO stated that unannounced surveys will ensure that hospital performance is based on the observation of hospitals’ routine operations rather than on how they operate after they have the opportunity to prepare to be surveyed. A JCAHO official also indicated that unannounced surveys will be more likely to detect deficiencies. The OIG and other organizations share JCAHO’s position on the value of unannounced surveys of hospitals and other health care organizations. The value of unannounced surveys also has been recognized for nursing homes, which state agencies survey on an unannounced basis. JCAHO’s pilot test of its new hospital accreditation process was limited and therefore unable to help determine the potential of the new process to detect deficiencies in Medicare COPs. According to JCAHO, the pilot test suggests that the new process was more likely than the former process to find quality problems. However, the pilot test sample included hospitals that volunteered or were selected by JCAHO and were not randomly selected, pilot test surveyors were accompanied by observers from JCAHO’s central office, and pilot test results were not independently evaluated. In addition, CMS has not completed its fiscal year 2004 validation program, which will include hospitals surveyed by JCAHO using the new process and thus does not yet have sufficient data on which to base a meaningful evaluation. According to JCAHO’s analysis of the pilot test, the new hospital accreditation process is more likely to identify quality problems since proportionately more hospitals under the new process received unfavorable accreditation decisions. 
JCAHO based its conclusion on a comparison of survey outcomes, called accreditation decisions, between 18 hospitals in the pilot test conducted in 2002 and 2003 and the 1,524 hospitals that had been surveyed under the pre-2004 accreditation process during 2003. Table 4 presents the data JCAHO used to make the comparison. As shown, proportionately fewer hospitals under the new process were accredited without having to make corrections. Although JCAHO provided the accreditation decision outcomes for these 18 pilot tests, it stated it preferred to use the number of “requirements for improvement” as the basis for analysis. However, JCAHO’s pilot test analysis was limited in three respects, which may have accounted for the smaller number of favorable accreditation decisions hospitals received under the new process. The hospitals participating in the pilot test were not randomly selected by JCAHO. As a result, these hospitals may not be representative of all JCAHO-accredited hospitals and therefore results cannot be generalized. During the pilot test, an observer from JCAHO’s central office accompanied each surveyor, and the knowledge that they were being observed may have influenced the surveyors’ actions. Under the pre-2004 process, observers only rarely accompanied JCAHO surveyors. JCAHO conducted its own evaluation of pilot test results. Evaluation of the pilot test by an entity independent of either JCAHO or the hospitals tested could help to ensure that survey outcomes were impartially interpreted. For example, CMS used an independent group to evaluate its redesign of the nursing home survey process. CMS has limited oversight authority over JCAHO’s hospital accreditation program, and its existing oversight activities need improvement. The unique status of JCAHO’s hospital accreditation program, which is specified in statute, does not permit CMS to take corrective action, such as restricting or removing its deeming authority. Additionally, CMS uses a measure that provides limited information to evaluate the performance of JCAHO’s hospital accreditation program, has significantly reduced the number of surveys conducted as part of CMS’s validation program, and does not use measures that are based on sound statistical methods to assess the performance of JCAHO’s hospital accreditation program. Because of JCAHO’s unique legal status, CMS’s oversight of JCAHO’s hospital accreditation program is limited in two major ways: Unlike other accreditation programs with deeming authority, JCAHO does not have to reapply to CMS to reauthorize its deeming authority, and CMS cannot take action to address performance problems with JCAHO’s hospital accreditation program. JCAHO’s hospital accreditation program is the only Medicare accreditation program for which CMS does not have to conduct an evaluation of the accreditation standards and the processes used to conduct surveys. Without this evaluation, CMS is deprived of key oversight tools it is authorized to use with other accreditation programs: detailed information about any proposed changes to the accreditation process and public input. CMS cannot require JCAHO to provide information about proposed changes to its accreditation requirements and hospital survey processes. 
Also, because it is not required to reapply to CMS for deeming authority, JCAHO does not have to provide CMS information that other accreditation programs must provide, such as a detailed description of its survey processes, a comparison of its standards to Medicare requirements, and the qualifications of its surveyors, which CMS reviews to ensure that the programs comply with Medicare requirements. For example, when JCAHO's hospice accreditation program applied for deeming status in 1999, CMS required changes to JCAHO's hospice accreditation process, including requiring JCAHO to make unannounced surveys of Medicare-certified hospices. According to a CMS official, JCAHO's hospital accreditation program has provided much of the information required of other accreditation organizations; however, CMS has no authority to require JCAHO to make changes to the hospital accreditation program as it does with other health care accreditation programs. Statutory provisions regarding public notice and comment do not apply to JCAHO's hospital accreditation program as they do to other accreditation programs. The reapplication process for other accreditation programs requires affording the public an opportunity to provide input to CMS on an accreditation program's request for deeming authority. Because JCAHO does not have to reapply for deeming authority, the public does not have the opportunity to review and comment on JCAHO's hospital accreditation program. A second limitation is CMS's inability to address any performance issues with JCAHO's hospital accreditation program. Although the rate of disparity for JCAHO's hospital accreditation program exceeded 20 percent in fiscal years 2000, 2001, and 2002, a rate that would have triggered a deeming authority review for any other Medicare accreditation program, CMS was unable to take enforcement action to address JCAHO's performance. When other Medicare accreditation programs have a rate of disparity of 20 percent or more, CMS can take steps such as imposing a year-long probationary period and removing deeming authority at the end of the probationary period if the rate of disparity remains at 20 percent or more. For JCAHO, however, CMS's actions toward correcting the program's deficiencies are limited to including recommendations for improvement in its annual reports to Congress and negotiating with JCAHO to voluntarily adopt CMS's recommendations. In its annual report to Congress, CMS made recommendations in fiscal year 2002 aimed at improving JCAHO's ability to detect serious deficiencies in the life safety code, part of the COP on physical environment. CMS noted that JCAHO permits hospitals to self-assess compliance with life safety code requirements. While CMS stated that it did not object to the concept of hospital self-assessment of life safety code requirements, it made five recommendations to JCAHO for improving implementation: 1. Require hospitals to use qualified personnel, such as fire marshals and architects, to conduct self-assessments of compliance with the life safety code requirements. 2. Set minimum standards for identifying and improving life safety code deficiencies identified by hospital self-assessments. 3. Require hospitals to submit their self-assessments on life safety code issues prior to JCAHO conducting accreditation surveys, to provide surveyors and personnel in JCAHO's central office time to review the material prior to the accreditation surveys. 4.
Increase the use of JCAHO experts in the life safety code requirements in its central office. 5. Address the issue of hospitals that do not make improvement within self-determined time frames. JCAHO did not adopt all of these recommendations. It disagreed with the first recommendation; its response indicated that its requirement to use qualified personnel to complete the self-assessment, while more general, was sufficient. It further indicated that policies were in place for CMS's second and fifth recommendations. CMS later agreed that JCAHO's policies do satisfactorily address the fifth recommendation. JCAHO planned to examine ways to adopt CMS's third and fourth recommendations. CMS, however, had no authority to compel JCAHO to comply with the remaining recommendations. According to CMS, it continues to discuss implementation of its recommendations with JCAHO. JCAHO stated that while its initial response to CMS's recommendations in 2003 reflected then-current JCAHO policies, subsequent policy changes are addressing CMS's recommendations. Specifically, JCAHO is working with the American Society of Hospital Engineers to develop a process for expert review of hospital self-assessments on life safety code issues prior to JCAHO's conducting on-site accreditation surveys and to identify those hospitals for which engineering expertise should be added to on-site surveys. CMS states that the goal of its validation program is to provide reasonable assurance to Congress that the JCAHO accreditation process ensures hospital compliance with Medicare COPs. However, the measure CMS uses to evaluate the performance of JCAHO's hospital accreditation program provides limited information and could mask problems with an accreditation program's performance in detecting serious deficiencies, and it is based on a target sample size of 1 percent of JCAHO-accredited hospitals. In addition, CMS does not report the extent to which its sample reflects the performance of the larger population of JCAHO-accredited hospitals. The rate of disparity between JCAHO's hospital accreditation survey findings and state survey agency findings, as currently calculated by CMS, does not fully explain the performance of JCAHO's hospital accreditation program in detecting serious deficiencies. CMS uses this measure in its reports to Congress to assess JCAHO's hospital accreditation program and as the basis for making recommendations for improvement. CMS calculates the rate of disparity as the difference between the number of hospitals found with serious deficiencies by state survey agencies and the number of hospitals found with serious deficiencies by the accreditation survey, divided by the number of hospitals in the sample. For example, if state survey agencies conducted 200 surveys as part of CMS's validation program and found 60 hospitals out of compliance with at least one COP, but JCAHO's surveys found that only 22 of those hospitals were out of compliance, the rate of disparity would be 19 percent ((60 - 22)/200). CMS has established in regulation a rate of disparity of 20 percent or greater as the threshold for taking action against an accreditation program. According to a CMS official, the use of 20 percent as the threshold is not based on empirical evidence but rather on what CMS believed Congress would find acceptable. Consequently, the threshold may not be appropriately placed to indicate unacceptable performance by a hospital accreditation program.
For example, if JCAHO failed to identify serious deficiencies in all 14 hospitals that the state agencies identified with serious deficiencies from a sample of 79 hospitals, the rate of disparity would be a satisfactory 18 percent ((14 - 0)/79). CMS's rate of disparity measure, used in isolation, does not consistently reflect an accreditation program's ability to detect serious deficiencies. As the number of hospitals with serious deficiencies detected by state survey agencies decreases, regardless of JCAHO's performance in detecting them, it is more likely that the rate of disparity will be less than CMS's 20 percent threshold. As a result, the performance of JCAHO's hospital accreditation program is difficult to judge on the basis of this measure alone. For example, if state survey agencies performed 200 validation surveys and found 100 hospitals, or 50 percent, with serious deficiencies, and JCAHO found 30 hospitals, or 30 percent of the hospitals found by state agencies, the rate of disparity would be 35 percent ((100 - 30)/200). However, if the state agencies found 50 hospitals, or 25 percent of the 200 hospitals, with serious deficiencies, and JCAHO found 15 hospitals, or 30 percent of the hospitals that the state agencies identified, the rate of disparity would be almost 18 percent ((50 - 15)/200). The percentage of serious deficiencies found by state survey agencies and also by JCAHO remained the same in both examples, but the rate of disparity was improved significantly by the larger number of hospitals without serious deficiencies in the second example. This indicates that the rate of disparity does not consistently measure the accreditation program's ability to detect serious deficiencies found by state survey agencies. (See table 5.) In addition to the rate of disparity, other components, such as the proportion of hospitals with serious deficiencies and the total number of serious deficiencies found by state agencies but missed by the accreditation program, are important indicators of an accreditation program's overall performance. CMS does not analyze the statistical results of its validation survey samples in ways that would allow it to better assess JCAHO's ability to detect serious deficiencies. CMS has not documented the methods it uses to select hospitals for validation surveys and did not supply us with clear technical justification for the methods used. Further, CMS's validation sample includes hospitals that, because of its sampling method, have varying chances of selection, but it does not take this into account when calculating statistics based on the sample. According to CMS's sampling method, the selection of hospitals is influenced by factors such as the month in the fiscal year that JCAHO performed the accreditation survey and how many hospitals were targeted for completion that year in the state in which the hospital was located. Thus, some hospitals have a greater chance of selection than others. CMS also does not take these different chances of selection into account when calculating statistics for its annual reports to Congress, which prevents CMS from accurately assessing JCAHO's performance. Moreover, CMS does not measure and report in its annual reports the extent to which its estimates based on the validation survey sample are likely to reflect how well JCAHO detects deficiencies in the larger population of hospitals it accredits.
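The arithmetic behind the rate-of-disparity examples above can be reproduced in a few lines, which also makes the measure's sensitivity to prevalence easy to see. The sketch below is illustrative arithmetic only, not CMS's program logic.

```python
def rate_of_disparity(found_by_states, also_found_by_jcaho, sample_size):
    """CMS's measure: share of sampled hospitals in which state agencies
    found a serious deficiency that the accreditation program did not."""
    return 100 * (found_by_states - also_found_by_jcaho) / sample_size

# Examples taken from the text:
print(round(rate_of_disparity(60, 22, 200), 1))   # 19.0, just under the threshold
print(round(rate_of_disparity(14, 0, 79), 1))     # 17.7, although JCAHO found none
print(round(rate_of_disparity(100, 30, 200), 1))  # 35.0, JCAHO found 30% of them
print(round(rate_of_disparity(50, 15, 200), 1))   # 17.5, same 30% detection rate
```

In the last two calls, JCAHO's detection rate is identical; only the prevalence of deficient hospitals in the sample changes, which is why the measure, used alone, says little about detection ability.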
In addition, the number of usable traditional validation surveys completed is smaller than the number of hospitals CMS samples for validation surveys. This difference may affect the accuracy of the data that CMS presents to Congress if the hospitals where the traditional surveys were completed produce different results than those where surveys are not completed or are not usable. During its sampling process, CMS selects a sample size close to the targeted number of hospitals each year. Some hospitals from this sample may be excluded because CMS chose to perform another type of survey for them that cannot be used to validate a JCAHO accreditation survey. In addition, state agencies are not always able to complete the requested traditional validation surveys within 60 days of the JCAHO accreditation survey, as required, or a hospital may be excluded because it lost its deemed status or closed. The size of the difference between the number of hospitals sampled and the number of usable traditional validation surveys completed therefore varies, as it did during the 3-year review period (see table 6). CMS reduced the number of validation surveys conducted by state agencies from a target of approximately 5 percent of the total number of hospitals that JCAHO accredits to a target of approximately 1 percent, with at least one survey in each state. Reducing the target from 5 percent to 1 percent cuts the number of validation surveys from 227 in fiscal year 2002 to a target of 75 in fiscal year 2003 and 72 in fiscal year 2004. Reducing the targeted number of validation surveys to 1 percent provides less reliable information on how well JCAHO's hospital accreditation program ensures compliance with Medicare COPs. For example, for a 5-percent target, the estimate of the proportion of JCAHO-accredited hospitals with a particular deficiency that is derived from the validation survey could be as much as 6.0 percentage points higher or lower, for a range of 12.0 percentage points. If the 5-percent target produced an estimate that 50 percent of JCAHO-accredited hospitals had a particular deficiency, the percentage of JCAHO-accredited hospitals not complying could range from 44.0 to 56.0 percent. However, for a 1-percent target, the estimate could be 11.4 percentage points higher or lower, for a range of about 22.8 percentage points. For example, if the 1-percent target produced an estimate that 50 percent of JCAHO-accredited hospitals had a particular deficiency, the percentage of JCAHO-accredited hospitals not complying with a Medicare COP could range from 38.6 to 61.4 percent. This reduction in the number of validation surveys is of additional concern because it coincides with the implementation of JCAHO's new accreditation process, which has an unproven capacity to detect deficiencies. CMS's target sample size for traditional validation surveys for fiscal year 2004 will be further reduced because the sample also includes 18-month validation surveys. In 2004, CMS is planning to conduct 17 of these 18-month surveys as part of its overall validation survey target of 72. Thus, CMS could be using as few as 55 traditional validation surveys to determine JCAHO's performance.
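The precision figures above behave like standard 95 percent confidence intervals for an estimated proportion near 50 percent. The sketch below reproduces them approximately; the exact sample sizes, the hospital population of roughly 4,211, and the finite-population correction are our assumptions about the calculation, not CMS's documented method.

```python
import math

def half_width(p, n, population=None, z=1.96):
    """Approximate 95% confidence-interval half-width, in percentage
    points, for an estimated proportion p from a sample of size n."""
    se = math.sqrt(p * (1 - p) / n)
    if population:  # optional finite-population correction
        se *= math.sqrt((population - n) / (population - 1))
    return 100 * z * se

# Roughly 4,211 JCAHO-accredited hospitals; 5% and 1% sample targets.
print(f"{half_width(0.5, 227, 4211):.1f}")  # ~6.3 points (report cites 6.0)
print(f"{half_width(0.5, 72, 4211):.1f}")   # ~11.5 points (report cites 11.4)
print(f"{half_width(0.5, 55, 4211):.1f}")   # ~13.1 points with 55 usable surveys
```

Whatever the exact method, the direction is clear: cutting the sample from roughly 227 to 72 or fewer surveys nearly doubles the uncertainty attached to any estimate drawn from the validation program.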
For 3 consecutive years, JCAHO's hospital accreditation program, which accredits most of the hospitals participating in Medicare, exceeded CMS's threshold for unacceptable performance. CMS validation surveys during that time period confirmed that JCAHO missed the majority of serious deficiencies found by state survey agencies. Yet CMS was unable to take action against JCAHO's hospital accreditation program, as it can with other accreditation programs, because it lacked the authority to do so. Although CMS has recommended in its annual reports to Congress that JCAHO make changes in its hospital accreditation program to improve its ability to detect serious deficiencies, some of these recommendations have not been implemented. Thus, it is vital for patient safety that JCAHO hospital accreditation surveys detect existing serious deficiencies and deny accreditation to hospitals that do not comply with Medicare COPs. CMS is unable to present to Congress an adequate assessment of JCAHO's performance because of limitations in its process for selecting hospitals for validation surveys and analyzing the survey results. CMS does not consistently portray the extent to which serious deficiencies are missed and does not identify the limitations in reporting the estimates it makes from its survey sample. CMS cannot assure Congress that JCAHO-accredited hospitals meet Medicare COPs because the measure for the rate of disparity, which determines poor performance, allows JCAHO to miss the majority of serious deficiencies and still be in an acceptable range of performance. Further, CMS's reduction in the number of validation surveys it uses to determine the performance of JCAHO's hospital accreditation program will provide less reliable information at a time when JCAHO is implementing a new hospital accreditation process that is unproven in its ability to detect serious deficiencies. In light of these limitations in CMS's validation of JCAHO's hospital accreditation program, we believe that CMS must improve its oversight so it can provide Congress with more accurate information regarding JCAHO's performance. Given the serious limitations in JCAHO's hospital accreditation program and the fact that efforts to improve this program through informal action by CMS have not led to necessary improvements, Congress should consider giving CMS the same kind of authority over JCAHO's hospital accreditation program that it has over all other Medicare accreditation programs. To strengthen the ability of CMS to identify and report to Congress on JCAHO's ability to ensure that the hospitals it accredits protect the safety and health of patients through compliance with the Medicare COPs, we recommend that the Administrator of CMS take the following three actions: modify the method used to measure the rate of disparity between validation survey findings and accreditation program findings to provide reasonable assurance that Medicare COPs are being met, and consider whether additional measures are needed to accurately reflect an accreditation program's ability to detect deficiencies in Medicare COPs; provide in the annual report to Congress an estimate, based on the validation survey sample, of the performance of all JCAHO-accredited hospitals, including the limitations of these estimates, based on generally accepted sampling and statistical methodologies, and develop a written protocol for these calculations; and annually conduct traditional validation surveys on a sample of JCAHO-accredited hospitals that is equal to at least 5 percent of all JCAHO-accredited hospitals. CMS and JCAHO commented on a draft of this report. In its comments, CMS concurred with our recommendations.
JCAHO stated it had no objection to our suggestion that Congress give CMS the same authority over its hospital accreditation program as it has over other Medicare accreditation programs. However, JCAHO took issue with the methodology we used for evaluating the performance of its hospital accreditation program. CMS's and JCAHO's specific comments and our responses follow. CMS's comments are reprinted in appendix IV and JCAHO's comments are reprinted in appendix V. CMS and JCAHO also provided technical comments, which we incorporated as appropriate. CMS stated that it has begun to examine the need for additional or alternative measures for the rate of disparity calculation. CMS also stated it will seek additional resources to further develop and implement new sampling and statistical methodologies that may allow results to be projected to all JCAHO-accredited hospitals and to increase the validation sample size. CMS specifically noted that it considers life-safety code compliance, on the part of all provider types, to be critically important. In the past 8 years, in its annual reports to Congress and its dialogues with JCAHO regarding its hospital accreditation program, CMS has identified physical environment as an important area where JCAHO needs to focus attention; CMS noted that 68 percent of facilities with a deficiency finding not identified by JCAHO had deficiencies in the physical environment area. JCAHO stated that our methodology for evaluating the performance of its hospital accreditation program was incomplete and did not provide a comprehensive assessment of its program's performance. We did not intend to do a comprehensive evaluation of JCAHO's overall hospital accreditation program. Rather, we focused our evaluation on how well JCAHO's hospital accreditation program ensures hospitals' compliance with Medicare participation requirements. There are four possible outcomes to a comparison between JCAHO's accreditation survey and a state validation survey: (1) both JCAHO and state agencies identify no deficiencies, (2) JCAHO identifies deficiencies not found by state agencies, (3) both JCAHO and state agencies identify the same deficiencies, and (4) state agencies identify deficiencies that JCAHO does not. We limited our evaluation to the fourth outcome because it illustrates the need for CMS oversight of the hospital accreditation process. We have clarified the scope of our evaluation to emphasize our focus on this outcome. JCAHO raised a concern that our characterization of the deficiencies that it missed but that state survey agencies found could mislead readers to believe that JCAHO misses hospitals with deficiencies 78 percent of the time. We have revised language in the report to further emphasize that the missed deficiency rate applies to hospitals in the validation survey sample in which the state survey agencies found deficiencies and cannot be generalized to all JCAHO-accredited hospitals. JCAHO further stated that our report does not take into account that JCAHO's hospital accreditation program detects deficiencies in hospitals that CMS does not find. However, it is to be expected that state survey agencies will not find all deficiencies found by JCAHO because hospitals may have corrected the deficiencies prior to the state agency surveys.
JCAHO stated that we misrepresented the potential of the new accreditation process in detecting deficiencies in Medicare COPs and provided new data regarding its first-quarter 2004 performance that indicate that JCAHO surveys may have detected a greater percentage of deficiencies related to patient care compared with the pre-2004 accreditation process. However, we maintain that until CMS validation surveys for 2004 are completed, there is no basis on which to determine whether the new process improves the detection of deficiencies in Medicare COPs. In addition, JCAHO stated, and we agree, that evaluating and improving the quality of care in hospitals is not about counting deficiencies; it is about finding those deficiencies that, if not fixed, will generate poor results for patients and making sure that these deficiencies are remedied in a timely fashion. JCAHO stated that we mischaracterized its response to the five recommendations that CMS made in 2002 to improve JCAHO's ability to detect deficiencies in the life safety code and that it is involved in frequent and ongoing dialogue with CMS regarding the recommendations and other life safety code issues. We have clarified language in the report regarding JCAHO's response to CMS's recommendations. As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its date. We will then send copies of this report to the Secretary of Health and Human Services and other interested parties. We will also make copies available to others upon request. In addition, the report will be available at no charge at the GAO Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please call me at (202) 512-7119. Another contact and key contributors are listed in appendix VI. We examined the extent to which JCAHO's pre-2004 survey process identified hospitals with deficiencies and individual deficiencies in Medicare COPs that were identified by state survey agencies. We chose these measures because they reflect performance in detecting and correcting serious deficiencies, which, according to CMS, substantially limit a hospital's capability to render adequate care and adversely affect the health and safety of patients. We reviewed data, provided by CMS, on 500 traditional validation surveys conducted by state survey agencies during fiscal years 2000 through 2002. In these validation surveys, state survey agencies documented whether they found serious deficiencies in Medicare COPs. CMS compared state survey agency findings with JCAHO's accreditation surveys that identified deficiencies in JCAHO's standards. CMS then determined whether the state survey agencies' findings on serious deficiencies in the 22 Medicare COPs that can be deemed were comparable to JCAHO's findings on deficiencies in JCAHO's standards in the following way. Two CMS experts, such as nurses, reviewed the comparability of serious deficiencies in the quality-of-care conditions identified in validation surveys to deficiencies in JCAHO's accreditation standards identified in JCAHO's hospital accreditation surveys. Two experts, such as building engineers, reviewed the comparability of serious deficiencies identified in the validation surveys on the physical environment condition. Where there was disagreement, the two experts met to resolve their differences. CMS does not have written protocols for determining comparability.
Experts are expected to use their best professional judgment. CMS experts also consider whether it is reasonable to conclude that the deficiencies existed at the time that JCAHO surveyed the hospital. For those deficiencies that CMS determines JCAHO failed to identify, CMS meets with JCAHO to address disputed findings and to consider additional evidence on comparability offered by JCAHO. There are four possible outcomes to this comparison of survey findings: (1) JCAHO and state agencies both identify no deficiencies, (2) JCAHO identifies deficiencies not found by state agencies, (3) JCAHO and state agencies both identify the same deficiencies, and (4) state agencies identify deficiencies that JCAHO does not. We focused on the fourth outcome because it highlights the need for CMS oversight of the hospital accreditation program. For the second outcome, there could be two reasons for the disparity between JCAHO's and state survey agencies' findings: hospitals corrected deficiencies identified by JCAHO prior to the state agency survey, or the state survey agency did not identify a deficiency that existed. In addition, not all JCAHO findings are equivalent to noncompliance with a Medicare COP. From these 500 surveys, we determined the number of hospitals with serious deficiencies and the total number of serious deficiencies identified by state agencies but that CMS determined were not identified by JCAHO. These data include 123 hospitals in which state survey agencies identified one or more serious deficiencies and JCAHO did not make comparable findings, according to CMS. These data also include 167 serious deficiencies that state agencies identified but for which, as determined by CMS, JCAHO did not make comparable findings. For fiscal years 2001 and 2002, we obtained from CMS a comparison between the validation surveys conducted by the state survey agencies and the accreditation surveys conducted by JCAHO, which identified serious deficiencies found by the state agencies but not by JCAHO, as determined by CMS. For fiscal year 2000, CMS did not supply its determinations of the comparability of findings in validation and accreditation surveys for 31 of 82 serious deficiencies. We followed a protocol similar to the one used by CMS to determine the comparability of these 31 serious deficiencies, which included 29 quality-of-care serious deficiencies and 2 physical environment serious deficiencies. Two analysts with nursing backgrounds compared the findings and made determinations on their comparability based on their professional judgment. In cases of disagreement, a third analyst with a background in nursing made the determination. We did not include 1998 and 1999 data in our analysis because CMS used a method that undercounted the number of deficiencies identified by state survey agencies but not identified by JCAHO. CMS did not count as deficient those cases in which state survey agencies determined that a hospital was not meeting the COP on physical environment but JCAHO determined that the hospital was in compliance because the hospital was following correction plans approved by JCAHO. To determine the potential of JCAHO's new accreditation process for improving the detection of deficiencies in Medicare COPs, we reviewed material supplied by JCAHO on the development and testing of its new process and interviewed JCAHO officials about the steps taken to test the new process and to analyze results.
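The hospital-level rates reported for this sample follow directly from these counts. The following minimal Python sketch shows the arithmetic, using the fiscal year 2000 through 2002 counts cited in this report; it is an illustration only.

sample_size = 500        # traditional validation surveys reviewed
with_deficiencies = 157  # hospitals in which state agencies found serious deficiencies
missed_by_jcaho = 123    # of those, hospitals with no comparable JCAHO finding, per CMS

# Prints 31 (percent of sampled hospitals with serious deficiencies)
print(round(100 * with_deficiencies / sample_size))
# Prints 78 (percent of those hospitals not identified by JCAHO)
print(round(100 * missed_by_jcaho / with_deficiencies))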
We also examined the features of the new accreditation process by reviewing descriptive material obtained from JCAHO and interviewing experts in health care quality. Because the new accreditation process was implemented in January 2004, we were limited in our ability to determine its effectiveness: we were not able to perform a comparative analysis of validation survey and JCAHO survey results under the new process. To examine the effectiveness of CMS's oversight of JCAHO's accreditation process, we analyzed the laws and regulations that define CMS's and JCAHO's authority. We reviewed the annual reports submitted to Congress on JCAHO's performance in identifying serious deficiencies, reviewed correspondence between CMS and JCAHO, and interviewed officials in both organizations. We analyzed the rate of disparity that CMS uses to determine the performance of JCAHO's hospital accreditation process in identifying deficiencies in Medicare COPs. To evaluate CMS's statistical methodology for the validation surveys, we interviewed CMS officials about the sampling and statistical methods. In the absence of written methodological documentation, we relied on information provided by CMS officials to evaluate the methodology. They gave us the following information about their sampling method. At the beginning of each year, CMS determines a target for the number of hospitals that will be sampled for validation surveys in each state. Each month, CMS receives a list of hospitals scheduled for a JCAHO accreditation survey in that month. Prior to sampling, CMS removes from the list those hospitals that have received a validation survey in the last 3-year accreditation cycle and hospitals that do not participate in Medicare. In the first month of the year, CMS selects a random sample of hospitals to be surveyed from JCAHO's list. In subsequent months, CMS removes hospitals in states in which the state target has been met and then selects a random sample of hospitals. Prior to sending the list to state survey agencies, CMS determines which hospitals will receive traditional validation surveys and which will receive other types of surveys that cannot be used to assess the performance of JCAHO's hospital accreditation program. State survey agencies must then complete traditional validation surveys within 60 days of the completion of JCAHO's accreditation survey for the results to be used by CMS to measure the performance of JCAHO's hospital accreditation program. According to CMS officials, these sampling procedures are necessary because CMS is not informed more than 1 month in advance which hospitals JCAHO will survey for accreditation. In reviewing the sampling procedures CMS officials described, we determined that CMS initially selects a probability sample of hospitals for its state agency validation surveys. However, hospitals have varying chances of selection in the sample depending on the month in the fiscal year in which JCAHO performs the accreditation survey and the number of hospitals targeted for completion that year in the state in which the hospital is located. Additionally, the way that CMS determines which type of survey a sampled hospital receives is not random. Therefore, the analysis we performed is limited to those hospitals included in the validation survey sample and cannot be projected to all JCAHO-accredited hospitals.
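The monthly selection steps that CMS officials described can be sketched as follows. This is a minimal illustration under stated assumptions: the hospital records (dictionaries with id and state fields), the state_targets and completed_by_state mappings, and the monthly sample size k are hypothetical constructs for illustration, not CMS's actual data structures or code.

import random

def monthly_sample(jcaho_schedule, surveyed_last_cycle, medicare_ids,
                   state_targets, completed_by_state, k):
    """Draw a random validation sample from the hospitals JCAHO plans to
    survey this month, applying the exclusions described above."""
    eligible = [h for h in jcaho_schedule
                if h["id"] not in surveyed_last_cycle  # no validation survey in last 3-year cycle
                and h["id"] in medicare_ids            # Medicare participants only
                and completed_by_state[h["state"]] < state_targets[h["state"]]]  # state target not yet met
    return random.sample(eligible, min(k, len(eligible)))

As the surrounding text notes, the resulting selection probabilities vary by month and state, and the later assignment of survey types is not random, which is why results cannot be projected to all JCAHO-accredited hospitals.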
To participate in Medicare, hospitals must maintain standards of patient safety and health that comply with Medicare requirements. There are currently 23 Medicare COPs. Table 7 provides a description of each Medicare COP. In January 2004, JCAHO introduced a new hospital accreditation process that includes several new features. Table 8 includes a description of selected new features of JCAHO's hospital accreditation process. In addition to the contact named above, Elaine Swift, Linda Kohn, Behn Kelly, Elizabeth T. Morrison, Roseanne Price, and Marie Stetser made key contributions to this report.
Hospitals accredited by the Joint Commission on Accreditation of Healthcare Organizations (JCAHO) are considered in compliance with Medicare participation requirements. GAO examined the extent to which JCAHO's pre-2004 hospital accreditation process identified hospitals not complying with Medicare requirements, the potential of JCAHO's new process for improving the detection of deficiencies in Medicare requirements, and the effectiveness of CMS's oversight of JCAHO's hospital accreditation program. GAO analyzed CMS data on hospitals that state surveyors found to have deficiencies in Medicare requirements that JCAHO surveyors did not detect, analyzed CMS's measure of JCAHO's ability to detect noncompliance with Medicare requirements, and interviewed JCAHO officials. JCAHO's pre-2004 hospital accreditation process did not identify most of the hospitals found by state survey agencies in CMS's annual validation survey sample to have deficiencies in Medicare requirements. In comparing the results of the two surveys, CMS considered whether it was reasonable to conclude that the deficiencies found by state survey agencies existed at the time JCAHO surveyed the hospital. In a sample of 500 JCAHO-accredited hospitals, state agency validation surveys conducted in fiscal years 2000 through 2002 identified 31 percent (157 hospitals) with deficiencies in Medicare requirements. Of these 157 hospitals, JCAHO did not identify 78 percent (123 hospitals) as having deficiencies in Medicare requirements. For the same validation survey sample, JCAHO also did not identify the majority, about 69 percent, of deficiencies in Medicare requirements found by state agencies. Importantly, the number of deficiencies found by validation surveys represents 2 percent of the 11,000 Medicare requirements surveyed by state agencies in the sample during this time period. At the same time, a single deficiency in a Medicare requirement can limit the hospital's capability to provide adequate care and ensure patient safety and health. Inadequacies in nursing practices or deficiencies in a hospital's physical environment, which includes fire safety, are examples of deficiencies in Medicare requirements that could endanger multiple patients. The potential of JCAHO's new hospital accreditation process to improve the detection of deficiencies in Medicare requirements is unknown because the process was just implemented in January 2004. JCAHO plans to move from announced to unannounced surveys in 2006, which would afford JCAHO the opportunity to observe hospitals' operations when the hospitals have not prepared in advance to be surveyed. In addition, the pilot test of the new accreditation process was of limited value in predicting whether it will be an improvement over the pre-2004 process in detecting deficiencies. Limitations in the pilot test included that hospitals were not randomly selected to participate; that observers from JCAHO accompanied each surveyor, thus possibly affecting surveyors' actions; and that JCAHO evaluated the results rather than an independent entity. CMS has limited oversight authority over JCAHO's hospital accreditation program because the program's unique legal status effectively prevents CMS from taking actions that it has the authority to take with other health care accreditation programs to ensure satisfactory performance; for example, CMS cannot require JCAHO's hospital accreditation program to submit to a direct review process or place the program on probation while monitoring its performance.
Further, CMS relies on a measure of how well JCAHO's hospital accreditation program detects deficiencies in Medicare requirements that provides limited information and can mask problems with program performance; in addition, CMS uses statistical methods that are insufficient to assess JCAHO's performance and has reduced the number of validation surveys it conducts.
In the United States, product safety, including fire safety, is largely promoted through a process of consensus-based standards and voluntary certification programs. The American National Standards Institute (ANSI) establishes requirements to ensure that standards are formulated through a consensus-based process that is open and transparent and that adequately considers and resolves comments received from manufacturers, the fire safety community, consumers, government agencies, and other stakeholders. Standards are generally developed in the technical committees of organizations that include independent laboratories, such as Underwriters Laboratories, and trade and professional associations, such as the American Society for Testing and Materials. These entities form a decentralized, largely self-regulated network of private, independent, standards-development organizations. For those organizations that choose to follow ANSI procedures, ANSI performs audits and investigations to ensure that standards-development organizations follow approved consensus-based procedures for establishing standards. Standards promulgated by such organizations can become part of a system of American National Standards currently listed by ANSI. Overall, according to the National Fire Protection Association (NFPA), the U.S. standards community maintains over 94,000 active standards, both American National Standards and others. These 94,000 active standards include private sector voluntary standards as well as regulatory and procurement standards. The process of developing consensus-based standards is designed to balance the needs of consumers, federal and nonfederal regulators, and manufacturers. According to ANSI officials, new standards are commonly adopted or existing ones are frequently revised because manufacturers express a need for such actions on the basis of the development of new products. Representatives of other parties—such as regulators or consumers—may raise concerns about product safety and performance. For marketing and consumer safety purposes, product manufacturers may have their products tested at independent testing laboratories to certify that the products meet applicable product standards. This testing and certification process is called "product conformity testing and certification." Some local, state, and federal agencies require such testing and certification. For example, manufacturers of electrical home appliances have their products tested and certified by Underwriters Laboratories to enable them to attest that the products meet safety standards regarding fire, electrical shock, and casualty hazards. Alternatively, where acceptable, manufacturers can certify on their own that their products were tested and met applicable standards. Standards are also voluntarily accepted and widely used by manufacturers and regulatory agencies to provide guidance and specifications to manufacturers, contractors, and procurement officials. Each year, millions of products are sold in the United States and throughout the world that bear the mark of testing organizations. Consumers, manufacturers, and federal agencies follow the very widespread, internationally recognized practice of relying on consensus standards and testing at laboratories to promote public safety. In the case of facilities and residences, the most extensive use of the standards is their adoption into model building codes by reference.
Model building codes contain standards published by many organizations, including professional engineering societies, building materials trade associations, federal agencies, and testing laboratories. When erecting facilities; renovating offices; and purchasing equipment, materials, and supplies, federal agencies rely on the fire safety standards developed by private standards-development organizations. Furthermore, the federal government has historically encouraged its agencies to use standards developed by these organizations. For example, in its 1983 Circular A-119, the Office of Management and Budget (OMB) encouraged agencies to use these standards. Moreover, the National Technology Transfer and Advancement Act of 1995 requires agencies to use standards developed or adopted by voluntary consensus bodies, except when doing so is inconsistent with applicable law or otherwise impractical. Essentially, OMB Circular A-119 and the act direct federal agencies to use voluntary consensus standards whenever possible. They also direct federal agencies to consult with and participate, when appropriate, in standards-setting organizations and to provide explanations when they do not use voluntary consensus standards in their procurement or regulatory activities. As of June 2001, according to NFPA, about 15 percent of the estimated 94,000 standards effective in the United States had been developed by civilian federal agencies. In addition, the Public Buildings Amendments of 1988 require the General Services Administration (GSA) to construct or alter buildings in compliance with the national building codes and other nationally recognized codes to the maximum extent feasible. Federal agencies also engage in a variety of activities related to certifying that products conform to standards. For example, the National Institute of Standards and Technology publishes directories listing more than 200 federal government procurement and regulatory programs in which agencies are actively involved in procuring or requiring others to procure products meeting certification, accreditation, listing, or registration requirements. Furthermore, many federal agencies participate in the development of fire standards and product-testing procedures. For example, GSA participates on technical committees, such as those of NFPA and Underwriters Laboratories. As a result, GSA specifies numerous products and building code regulations that meet standards and testing requirements from standards-development organizations and testing laboratories. In addition, voluntary standards and the testing of products to those standards are widely accepted by other civilian federal agencies, such as the departments of Agriculture, Housing and Urban Development, the Interior, Labor, Transportation, and the Treasury, as well as the Environmental Protection Agency. The federal government has no comprehensive, centralized database regarding the incidence of fires in federal facilities or the causes of such fires. According to NFPA, fires in office facilities, including federal civilian facilities, annually cause about 90 injuries and about $130 million in property damages. Although responsible for maintaining a national fire incident database and for serving as the lead agency in coordinating fire data collection and analysis, the U.S. Fire Administration does not collect data on the number of fires in federal office facilities and the causes of those fires, nor on the specific types of products involved in the fires.
For its part, GSA collects a minimal amount of information in the facilities for which it is responsible—about 330 million square feet in over 8,300 buildings—to determine the number and causes of fires that have occurred in the facilities. In addition, like the U.S. Fire Administration, NFPA does not gather specific information about whether a fire occurred on private or government property or whether the fire involved specific products. Thus, these databases do not contain sufficiently detailed data to allow the identification of fire incidents in federal facilities or fires associated with specific product defects. Also, the government does not have a mechanism for providing fire incident data to standards-development organizations when they consider the revision of product standards and testing procedures. As a result of a lack of detailed data collection and reporting systems, the government cannot assess the number and causes of fires in federal facilities and therefore cannot determine if any action is needed to ease the threat of fire. Certain private sector firms take steps to identify the nature of the fire threat in their facilities. For example, to help insurance companies, communities, and others evaluate fire risks, the Insurance Services Office, an affiliate of the insurance industry of the United States, maintains detailed records and performs investigations about individual properties and communities around the country, including such factors as the physical features of buildings, detailed engineering analyses of building construction, occupancy hazards, and internal and external fire protection. In addition, the Marriott Corporation, a worldwide hotel chain, maintains data on fires throughout its facilities. According to a Marriott official, Marriott uses this information to assess the risk of fire in its facilities and to take corrective actions. At the same time, the number and causes of fires in federal workspace are not known. The federal government—an employer of over two million civilian employees—does not have a system for centrally and comprehensively reporting fire incidents in its facilities and the causes of those incidents. For example, according to GSA officials, the agency, which manages over 300 million square feet of office space, collects information on fires that cause over $100,000 in damage. However, when we requested this information, GSA could not provide it and provided examples of only two fires. According to a GSA official, GSA cancelled a requirement for its regional offices to report smaller fires to a central repository. GSA explained that it found the task of reporting smaller fires to be very labor intensive and time consuming. GSA also found that analysis of the reported information could not determine specific fire trends. Databases that are available and maintained by federal agencies—such as databases of the Department of Labor, Consumer Product Safety Commission, and U.S. Fire Administration—do not provide sufficient detail for determining the number and causes of fires in federal facilities, including the products involved in the fires. For example, according to the Department of Labor (Labor), 7 civilian federal employees died (excluding the 21 who died in forest or brush fires), and 1,818 civilian federal employees were injured while at work as a result of fires or explosions between 1992 and 1999.
Although Labor gathers information about federal employees' injuries and fatalities caused by fires, this information does not identify details, such as the cause of the fire. Furthermore, because of a lack of reporting detail, the data do not lend themselves to an analysis of what specific products may have been involved in the fire and whether the product had been certified as meeting appropriate product standards. Within Labor, the Occupational Safety and Health Administration's (OSHA) Office of Federal Agency Programs, the Bureau of Labor Statistics, and the Office of Workers' Compensation Programs routinely gather information about federal employee injuries and fatalities. OSHA's Office of Federal Agency Programs, whose mission is to provide guidance to each federal agency on occupational safety and health issues, also collects annual injury statistics from each federal agency. These statistics are in aggregated form, however, and do not provide detail about the nature or source of the injury. The Department of Labor's Bureau of Labor Statistics has been collecting information on federal employee fatalities since 1992 through its Census of Fatal Occupational Injuries (CFOI). This census contains work-related fatality data that the federal government and the states have gathered from workers' compensation reports, death certificates, the news media, and other sources. According to the CFOI, between 1992 and 1999, 7 civilian federal employees were fatally injured due to fire-related incidents while working (excluding the 21 who died in brush or forest fires). Although the fatal injuries census does identify federal employee fatalities due to fires, it does not contain details about the fire, such as the cause of the fire or the types of products or materials that may have been involved in the fire. Also within the Department of Labor, the Office of Workers' Compensation Programs maintains information about federal employees or families of federal employees who have filed claims due to work-related traumas. The office was able to provide from its database information about the claims of federal employees or their families resulting from fire-related incidents. According to the Office of Workers' Compensation, between 1992 and 1999, 1,818 civilian federal employees were injured in federal workspace as a result of fire-related incidents while working. However, this information includes data only for those federal employees who actually filed claims. Similar to CFOI data, this database does not contain additional details about the fire, such as the cause of the fire or the types of products or materials that may have been involved in the fire. The Consumer Product Safety Commission maintains a variety of data on product recalls and incidents related to consumer products. However, none of the four databases that it maintains can identify information about federal facilities or federal employees. The U.S. Fire Administration is chartered as the nation's lead federal agency for coordinating fire data collection and analysis. However, the national fire incident databases maintained by the U.S. Fire Administration do not gather specific information about whether a fire occurred on private or government property or whether the fire involved specific products. The Fire Administration maintains the National Fire Incident Reporting System (NFIRS)—a national database through which local fire departments report annually on the numbers and types of fires that occur within their jurisdictions, including the causes of those fires.
Reporting, however, is voluntary; according to the U.S. Fire Administration, only about one-half of all fires that occur each year are reported as a result. In addition, the U.S. Fire Administration does not collect data on the number of fires in federal office facilities and the causes of those fires, nor on the specific types of products involved in a fire. According to its comments on a draft of our report, the Fire Administration does not have the resources or authority to implement a nationwide study of fires in federal workspace. In addition to the federal databases, NFPA also maintains a national fire incident database. According to NFPA, between 1993 and 1997, an average of 6,100 fires occurred per year in federal and nonfederal office space, resulting in an average of 1 death, 91 injuries, and $131.5 million in property damage per year. NFPA's estimates are based on information that fire departments report to the Fire Administration's NFIRS system and on information from NFPA's annual survey. NFPA annually samples the nation's fire departments about their fire experiences during the year; using these data, NFPA projects overall information about fires and their causes to the nation as a whole. However, neither the U.S. Fire Administration nor NFPA gathers specific information about whether a fire occurred on private or government property or whether the fire involved specific products. In the past, the federal government has collected data regarding fires occurring on federal property. The Federal Fire Council was originally established by Executive Order within GSA in 1936 to act as an advisory agency to protect federal employees from fire. The council was specifically authorized to collect data concerning fire losses on government property. However, the council moved to the Department of Commerce in 1972 and was abolished in 1982. Along with manufacturers, consumer representatives, fire safety officials, and others, the federal government is one of several important stakeholders involved in the standards-development process. However, as previously discussed, the government does not consistently and comprehensively collect information on fire incidents in federal facilities, and hence it cannot systematically provide these data to standards-development organizations for consideration during revisions of standards. Furthermore, some federal agencies may be slow to respond to information about failures of certain products, including those products intended to suppress fires. In at least one case, a fire sprinkler product that had failed both in the workplace and in the testing laboratory as early as 1990 continued to be used in federal facilities and has only recently been replaced at some facilities. This case is discussed below. Omega sprinklers were installed in hundreds of thousands of nonfederal facilities and in about 100 GSA-managed buildings. In 1990, a fire occurred at a hospital in Miami, FL, resulting in four injuries. During this fire, Omega sprinklers failed to activate. Through 1998, at least 16 additional fires occurred, during which Omega sprinklers failed to work, including a May 16, 1995, fire at a Department of Veterans Affairs hospital in Canandaigua, NY. During the New York fire, an Omega sprinkler head located directly over the fire failed to activate. Losses resulting from these and other fires were estimated at over $4.3 million (see table 1).
Although none of the fires reported in table 1 occurred in Fairfax County, VA, the County fire department became concerned because many of the sprinklers had been installed in public and private facilities in the county. Throughout the mid-1990s, by publicizing its concerns about the sprinklers, the County fire department contributed to the widespread dissemination of information about the sprinklers in the media. In addition, tests performed in 1996 at independent testing laboratories—Underwriters Laboratories and Factory Mutual Research Corporation—revealed failure rates of 30 percent to 40 percent. On March 3, 1998, the Consumer Product Safety Commission announced that it had filed an administrative complaint against the manufacturer, resulting in the October 1998 nationwide recall of more than 8 million Omega sprinklers. The agency began investigating Central Sprinkler Company's Omega sprinklers in 1996 when an agency fire engineer learned about a fire at a Marriott hotel in Romulus, MI, where an Omega sprinkler failed to activate. After determining that the hazard warranted recalling the product, the Commission staff sought a voluntary recall from Central. Unable to reach such an agreement with Central, the agency's staff were authorized to file an administrative complaint against the company. Moreover, the Commission attempted to coordinate with other federal agencies, such as the Department of Veterans Affairs and GSA. The Department of Veterans Affairs participated in the recall in accordance with the terms of the Commission's settlement agreement with the manufacturer. GSA officials stated that they became aware of the problems associated with Omega sprinklers in 1996 after hearing about them from the news media and Fairfax County Fire Department officials. GSA began a survey to identify the 100 GSA-managed buildings that contained the sprinklers. It also pursued an agreement with the manufacturer, resulting in a 1997 negotiated settlement for the replacement of some 27,000 devices in GSA-controlled buildings. Officials from OSHA stated that they were unsure about when they became aware of the problems associated with Omega sprinklers. An agency official explained that OSHA generally does not monitor information regarding problems with specific products, except for Consumer Product Safety Commission recalls. According to OSHA, it checks such recalls only informally and within the limited context of one of its programs, but not as a part of its primary compliance efforts. In addition, according to OSHA officials, when OSHA did find out about the Omega sprinkler problems, it took no action because such problems are outside the agency's jurisdiction unless the problems involve noncompliance with applicable OSHA requirements. According to an OSHA official, OSHA does issue "Hazard Information Bulletins" that could potentially contain information about failures of specific products. However, these bulletins do not generally duplicate Consumer Product Safety Commission recall information and do not generally concern consumer products. Federal facilities not controlled by GSA—including those of Capitol Hill (the House of Representatives, the Capitol, the Senate, and the Library of Congress) and the Smithsonian Institution—have either recently replaced or are just now replacing the defective Omega sprinklers.
According to an official of the Architect of the Capitol, although the facility's management was aware of the problems with the sprinklers, it continued using them because of cost considerations. At the time our review was completed, the Architect of the Capitol had removed and replaced the Omega sprinklers in all of the House of Representatives buildings and Capitol buildings, most of the Senate buildings, and one of the Library of Congress' buildings. The Architect of the Capitol was also in the process of replacing them in the remainder of the Senate and Library buildings. In addition, according to the Chief Fire Protection Engineer of the Smithsonian, agreement for a free-of-cost replacement of the Omega sprinklers had been reached, although the process of replacing them had not begun at the time we completed our work. At your request, we also reviewed concerns about the extent to which information technology equipment—such as computer printers, monitors, and processing units—could be a source of fires in offices, homes, and other places, including federal workspace. A private testing laboratory in Sweden recently performed experiments that suggested that some types of information technology equipment could be subject to damage from flames that originate from external sources. In response to these concerns, the Information Technology Industry Council convened a panel of stakeholders—including the Consumer Product Safety Commission, Underwriters Laboratories, and others—to study the issue. The panel found that information technology equipment did not pose a widespread fire threat in the United States. According to the representatives of the American Chemistry Council, the threat of information technology equipment fires from external sources is mitigated by the presence of various types of flame retardants in the casings of this equipment. Moreover, representatives of the Information Technology Industry Council stated that the industry has a policy of making its equipment as safe as possible for consumers. They agreed, however, that the issue of the flammability of information technology equipment needed further study. Fires, even relatively small ones, can have tragic and costly consequences. Knowing the numbers and types of fires in workspace, as well as the causes of fires and any products involved, is critical for understanding the extent of the risk of fire and can lead to identification and implementation of steps to reduce this risk. Some private sector organizations—for example, a major hotel chain and some insurance organizations—track the number of fires in different types of facilities and their causes. Such information is used to manage this risk and reduce property damage, injuries, and the loss of life. However, the federal government, which employs over two million people in space that GSA and other agencies manage, collects very limited information on fires and lacks information on the risk of fires in its workspace. Without more complete information on fires, the federal government—a key player in the standards-development process—cannot provide timely information on the causes of fires in federal facilities to standards-development organizations for their use in developing and revising standards, testing procedures, and certification decisions.
Collecting and analyzing data on the risk of fire in its workspace could enable the government to better protect its employees and enhance its ability to participate in producing standards that would better protect the public at large from fire. We recommend that the Administrator, U.S. Fire Administration, in conjunction with the Consumer Product Safety Commission, GSA, OSHA, and other federal agencies that the Fire Administration identifies as being relevant, examine whether the systematic collection and analysis of data on fires in federal workspace is warranted. If they determine that data collection and analysis are warranted, data that should be considered for collection and analysis include: the number of fires in federal workspace; property damage, injuries, and deaths resulting from such fires; and the causes of these fires, including any products involved. In addition, the agencies should discuss, among other topics deemed relevant, the availability of resources for implementing any data collection system and any needed authority to facilitate federal agencies' cooperation in this effort. We provided copies of a draft of this report to the heads of the Federal Emergency Management Agency's Fire Administration and GSA, as well as the Consumer Product Safety Commission and the Department of Labor. Because of its role in testing Omega sprinklers, we also provided a copy of the report to Underwriters Laboratories. Although Underwriters Laboratories had no comments on the draft, the other recipients of the draft provided comments via e-mail. These comments, and our responses to them, are discussed below. In commenting on our draft report, the Director of the Fire Administration's National Fire Data Center agreed in principle with our recommendation by stating that Fire Administration officials would gladly meet with GSA and others to examine whether specialized data collection is warranted. We welcome the Fire Administration's proposal. In addition, the Fire Administration listed several obstacles to the creation of a complete and accurate fire incident reporting system: (1) its lack of resources, (2) its lack of authority to require other federal agencies to report fires, and (3) its lack of on-site management and control over an existing fire incident reporting system, the National Fire Incident Reporting System (NFIRS). The Fire Administration also stated that it does not specifically collect data on the number and causes of fires in federal office facilities and that no indication exists that the fire problem in federal facilities differs significantly from the overall national fire experience in similar workplace environments. We agree that data on federal fires are not currently collected, and we would cite this lack of information as a significant reason for exploring the need for a system to report the number and causes of fires in federal space. We further agree that a lack of resources, of authority to compel fire incident reporting, and of management over reporting may pose serious obstacles to improved fire incident reporting; therefore, we urge that the Fire Administration address these factors with other agencies when it meets with them to discuss the need for more specialized reporting on fires in federal workspace. GSA senior program officials commented on a draft of our report. They requested that we delete a statement in our draft report that GSA could not provide us with complete information on fires that caused over $100,000 damage in federal facilities it manages.
GSA said that our statement was not germane. We declined to make this change because the statement is germane to our discussion about a lack of information on fires in the federal workplace. GSA's inability to provide the information we requested serves to illustrate this very point. In addition, we added information in our report regarding GSA's explanation that it had cancelled a previous requirement for its regional offices to report smaller fires to a central repository. GSA explained that such reporting was labor intensive and time consuming, and analyses of this information could not yield specific fire trends. We agree with GSA that some reporting requirements may be labor intensive, time consuming, and not helpful. Therefore, in our view, as stated above and as reflected in our recommendation, the Fire Administration should address these factors with GSA and other agencies when it meets with them to discuss the need for more specialized reporting on fires in federal workspace. GSA did not comment on the recommendation in the draft of our report. In addition, Department of Labor officials provided technical and clarifying comments, all of which we incorporated into our report. However, they did not comment on the recommendation. The Department of Labor's Bureau of Labor Statistics Assistant Commissioner, Office of Safety and Health, provided additional data regarding the number of federal employees who died as a result of fires or explosions from 1992 through 1999, clarifying that most of these fatalities occurred outside of federal buildings. The Department's Occupational Safety and Health Administration's Acting Director for Policy provided additional information, which we incorporated into our report, about the extent of its involvement in the Omega sprinkler case and the rationale for the actions it took. The Consumer Product Safety Commission stated that its comments were editorial in nature, and we revised our report to incorporate these comments. As arranged with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after the date of this letter. At that time, we will send copies of this report to the cognizant congressional committees; the Administrator, General Services Administration; the Chairman, Consumer Product Safety Commission; the Secretary of Labor; and the Administrator, Federal Emergency Management Agency. We will also make copies available to others on request. If you have any questions about this report, please contact me at (202) 512-4907. Key contributors to this report were Geraldine Beard, Ernie Hazera, Bonnie Pignatiello Leer, Bert Japikse, and John Rose. Our report (1) provides information on the federal government's reliance on private voluntary fire standards and on testing products against those standards and (2) discusses whether data that are available about fire incidents and their causes in civilian federal facilities are sufficient to protect federal workers from the threat of fire. To examine the government's reliance on fire safety standards and testing, we reviewed policies and procedures regarding how standards-setting organizations and independent laboratories establish fire safety standards and test products, as well as the roles of federal agencies and other interested parties in these processes.
We contacted standards-development organizations, including Factory Mutual Research, Underwriters Laboratories, Southwest Research Institute, the American National Standards Institute (ANSI), and the American Society for Testing and Materials. We also obtained information regarding how testing and standards-setting laboratories and organizations consider fire incident data and other information about fire hazards when revising fire safety standards and testing procedures. We obtained and analyzed regulatory and statutory criteria regarding the federal role in fire safety standards and testing. We interviewed federal officials from the General Services Administration (GSA), the National Institute of Standards and Technology, the U.S. Fire Administration, the Consumer Product Safety Commission, and the Department of Labor, as well as officials from standards-development organizations. We also interviewed fire protection officials, including officials from the International Association of Fire Fighters, the International Association of Fire Chiefs, and the Fairfax County, VA, Fire Department to obtain information on setting standards and testing products. To examine whether data are available about incidents and causes of fires in civilian federal facilities, we contacted GSA, the manager of about 40 percent of all civilian federal office space. However, GSA does not routinely collect information about all fires that occur in federal facilities. Therefore, we obtained and analyzed fire protection incident data from the Fire Administration and the National Fire Protection Association (NFPA). The U.S. Fire Administration maintains the National Fire Incident Reporting System, which is the world's largest national annual database of fire incident information. State participation is voluntary, with 42 states and the District of Columbia providing reports. The data in the National Fire Incident Reporting System comprise roughly one-half of all reported fires that occur annually. NFPA annually surveys a sample (about one-third) of all U.S. fire departments to determine their fire experiences during the year. NFPA uses this annual survey together with the National Fire Incident Reporting System to produce national estimates of the specific characteristics of fires nationwide. Through a review of the databases, we found that there was not sufficient detail to determine which of the fires reported occurred in federal facilities. In addition, the fire departments do not document the name brands of any product that might have been involved in a fire. However, NFPA was able to provide information about fires that have occurred in office space (federal and nonfederal) from 1993 through 1998. Finally, we did not conduct a reliability assessment of NFPA's database or the National Fire Incident Reporting System. We also attempted to determine the number of civilian federal employees who may have been injured or killed as a result of a fire-related incident while at work. In this regard, we obtained information from the Bureau of Labor Statistics' Census of Fatal Occupational Injuries (CFOI) regarding civilian federal employee fatalities from 1992 through 1999. The federal government and the states work together to collect work-related fatality data from workers' compensation reports, death certificates, news stories, and other sources for CFOI. All 50 states participate in CFOI.
The Bureau of Labor Statistics was able to provide information from CFOI describing the number of civilian federal employees fatally injured due to fire-related incidents while at work. We also obtained information from the Office of Workers' Compensation Programs from 1992 through April 2001 regarding civilian federal employees or their families who had filed for workers' compensation as a result of an injury or fatality due to a fire-related incident while at work. However, the data represent only those incidents for which a civilian federal employee or the family filed a claim. With the limited data available from the fatal injuries census and Office of Workers' Compensation Programs, we were unable to analyze the number of claims filed due to bombings, such as the April 1995 Murrah Federal Building bombing in Oklahoma City, OK, and the August 1998 bombing of the U.S. Embassy in Dar Es Salaam, Tanzania. In addition, according to CFOI, the fatality data do not include fatalities due to bombings, such as the Oklahoma City bombing and the Dar Es Salaam bombing. When a fatality is reported, CFOI requires that the categories of Assaults and Violent Acts, Transportation Accidents, Fires, and Explosions take precedence in the reporting process; when two or more of these events occur, the person entering the information selects the first event listed. The Bureau of Labor Statistics classified the Oklahoma City bombing deaths as homicides under the Assaults and Violent Acts category. In addition, the Office of Workers' Compensation Programs was able to provide information on the number of injuries to civilian federal employees that its Dallas District Office reported for 1995 as resulting from explosions. According to the Office of Workers' Compensation, it is likely that many of these injuries resulted from the Oklahoma City bombing. Furthermore, the databases do not contain any details about the fires. We used the fatality data from CFOI because it is the more comprehensive source of federal employee fatality information. Finally, we did not conduct a reliability assessment of the Bureau of Labor Statistics' CFOI database or the database of the Office of Workers' Compensation Programs. We also obtained information about fire incidents related to consumer products by contacting the Consumer Product Safety Commission. The Commission maintains several databases that allow it to conduct trend analyses of incidents involving various types of products, including the National Electronic Injury Surveillance System, a Death Certificate File, the Injury or Potential Injury Database, and the In-Depth Investigation File. In addition, the Commission maintains a library (paper files) of information on products that have been recalled. However, none of these sources contained information that would identify federal facilities, federal employees, or product brand names, except for products that have been recalled. To examine the quality and limitations of these data, we reviewed relevant documents and interviewed officials from organizations that compile and report the data, including the National Fire Protection Association, Fire Administration, Consumer Product Safety Commission, Occupational Safety and Health Administration, Bureau of Labor Statistics, Office of Workers' Compensation Programs, and National Institute of Standards and Technology.
As requested, we examined details about reporting incidents and concerns involving Omega sprinkler heads and how standards-development organizations, federal agencies, and others responded to reports about the failures of these devices. We contacted officials from, and in some cases obtained documentation from, the Fairfax County (VA) Fire Department. We also contacted various federal regulatory agencies or agencies that used or were indirectly involved in using Omega sprinklers, including GSA, the Consumer Product Safety Commission, the Occupational Safety and Health Administration, the National Institute of Standards and Technology, the Architect of the Capitol, the Smithsonian Institution, and the Department of Veterans Affairs. We also contacted officials from various laboratories that had tested Omega sprinklers, including Underwriters Laboratories, Factory Mutual, and the Southwest Research Institute. We also interviewed officials from the Marriott Corporation, which, along with Fairfax County, had publicized the problems associated with the sprinklers. As requested, we also reviewed concerns about the possible flammability of information technology equipment. In this regard, we obtained information about such factors as the types of flame retardants currently used in the casings of information technology equipment and concerns about the environmental and health impacts of these substances, the standards used to mitigate the flammability of information technology equipment, and the tests used to determine the flammability of this equipment. Our sources of information were the American Chemistry Council; the Great Lakes Chemistry Council; the Information Technology Industry Council; the National Association of State Fire Marshals; SP (a private testing laboratory in Sweden); the National Fire Protection Association; Underwriters Laboratories; and federal agencies, including the U.S. Consumer Product Safety Commission and the U.S. Department of Commerce's National Institute of Standards and Technology. We conducted our work from December 2000 through August 2001 in accordance with generally accepted government auditing standards.
Developing fire protection standards and testing products against them are critical to promoting fire safety. Business offices, including federal facilities, experience thousands of fires, more than $100 million in property losses, and dozens of casualties each year. Knowing the number and types of fires in the workplace, as well as their causes, is critical to understanding and reducing fire risks. Some private-sector groups track the number and causes of fires in different types of buildings. Such information is used to manage risk and reduce property damage, injuries, and deaths. However, the federal government collects little information on the fire risks in its facilities. As a result, the federal government cannot provide standards-development organizations with timely information that could be used to develop or revise fire safety standards, testing procedures, and certification decisions. Collecting and analyzing such data would help the government to better protect its employees and would contribute to the production of better standards to protect the public from fire.
Bank capital performs several important functions. Among other things, capital acts as a financial cushion to absorb unexpected losses, promotes public confidence in the solvency of the institution and the stability of the banking sector, and provides protection to depositors and deposit insurance funds. Because of capital's role in absorbing losses, promoting confidence, and protecting depositors, federal banking regulations require banking organizations to maintain adequate capital, and regulators set minimum capital levels to help ensure that institutions do so, including a target total minimum risk-based capital ratio—that is, the ratio of capital to risk-weighted assets. Federal law authorizes banking regulators to take a variety of actions to ensure capital adequacy, including informal and formal enforcement actions. Federal banking regulators generally expect institutions to hold capital at levels higher than regulatory minimums. Capital rules in the United States generally follow a framework of measures adopted by the Basel Committee. U.S. federal banking regulators have adopted various risk-based capital regimes over the past decades. Under these frameworks, assets and off-balance-sheet exposures are assigned to one of several broad risk categories according to the obligor (for example, the person or legal entity contractually obligated on an exposure) or, if relevant, the guarantor or the nature of the collateral. Banking organizations multiply the aggregate dollar amount or exposure amount in each risk category by the risk weight associated with that category. The resulting risk-weighted amounts from each of the risk categories are added together, and generally this sum is the banking organization's total risk-weighted assets, which comprises the denominator of the risk-based capital ratio. For example, a $1,000 on-balance-sheet asset at a 20 percent risk weight would equal $200 in risk-weighted assets. An additional $1,000 on-balance-sheet asset at a 50 percent risk weight would equal $500 in risk-weighted assets, for a total of $700 in risk-weighted assets (compared to the $2,000 in total assets). The risk weights enable one to calculate the amount of capital a banking organization would need to hold for a given asset—its "capital charge"—in order to meet the minimum risk-based capital ratio requirements. To meet an 8 percent minimum total capital ratio requirement, the organization with the $700 in risk-weighted assets in the previous example would need to hold $56 in capital ($700×0.08). The minimum total capital charge for the $1,000 on-balance-sheet asset that was risk-weighted at 20 percent would be $16 ($1,000×0.2×0.08), while the minimum capital charge for the $1,000 on-balance-sheet asset that was risk-weighted at 50 percent would be $40 ($1,000×0.5×0.08).
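To make this arithmetic concrete, the following minimal sketch computes risk-weighted assets and the corresponding minimum capital charge for the example above. The function and variable names are ours, for illustration only, and are not drawn from the capital rules themselves.

```python
# Illustrative sketch of the risk-weighting arithmetic described above.
MIN_TOTAL_CAPITAL_RATIO = 0.08  # 8 percent minimum total risk-based ratio

def risk_weighted_assets(exposures):
    """Sum of exposure amount x risk weight over (amount, weight) pairs."""
    return sum(amount * weight for amount, weight in exposures)

# Two $1,000 on-balance-sheet assets at 20 and 50 percent risk weights.
exposures = [(1_000, 0.20), (1_000, 0.50)]
rwa = risk_weighted_assets(exposures)          # 700.0
minimum_capital = rwa * MIN_TOTAL_CAPITAL_RATIO  # 56.0
print(rwa, minimum_capital)
```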
Risk weights for mortgages and other mortgage-related assets have been included in a number of these regulatory capital frameworks over the years, including the following:

The Basel Capital Accord (Basel I), which was adopted in 1988 and implemented in the United States in the early 1990s, established a system of generally applicable risk weights for specific assets (including mortgage-related assets) to calculate total risk-weighted assets and defined a minimum total risk-based capital ratio (the ratio of regulatory capital to risk-weighted assets) of 8 percent, with limited exceptions. Under this system, all assets of a certain category—for example, commercial loans—were assigned a flat risk weight without regard for differences in credit quality among the assets in that category (a simple risk-bucket approach). Asset categories were classified into one of four risk-weight buckets—0 percent, 20 percent, 50 percent, or 100 percent.

Amendments to the U.S. federal banking regulators' rules adopted in 2001 implemented a multilevel, ratings-based approach to assess capital requirements on asset securitizations—including mortgage-backed securities (MBS)—based on their relative exposure to credit risk. The approach used credit ratings from nationally recognized statistical rating organizations (NRSRO) to measure relative exposure to credit risk and determine the associated risk-based capital requirement.

In 2007, U.S. federal banking regulators adopted capital rules for large internationally active banking organizations that were based on a revised framework published by the Basel Committee in 2006 (Basel II). Only large, internationally active banks—banks with consolidated total assets (excluding assets held by an insurance underwriting subsidiary of a bank holding company) of $250 billion or more or with consolidated total on-balance-sheet foreign exposure of $10 billion or more—were required to adopt the advanced approaches for measuring risk (including mortgage-related credit risk) established in the Basel II-based rules. Under these rules, the advanced internal ratings-based approach used risk parameters determined by a bank's internal systems as inputs into a formula developed by supervisors for calculating minimum regulatory capital and expanded the use of credit ratings to measure credit risk.

U.S. federal banking regulators promulgated a final rule in 2013 to incorporate many of the changes included in the Basel III framework. Among other changes, the final rule includes a new standardized approach for credit risk to replace the Basel I generally applicable risk-based capital rule. The final rule also removed references to credit ratings that were in the Basel I generally applicable rule and the advanced internal ratings-based approach, consistent with requirements in Section 939A of the Dodd-Frank Act.

Other entities that hold mortgages and mortgage-related assets have different capital requirements. For example, in 1992 the Office of Federal Housing Enterprise Oversight, which at the time was the regulator for the enterprises, adopted minimum capital requirements based on the enterprises' on-balance-sheet assets and off-balance-sheet obligations. Nonbank financial institutions that service mortgages for the enterprises and Ginnie Mae must comply with minimum capital and net worth requirements those entities have issued. The Basel III final rule adopted in 2013 by the U.S. federal banking regulators and generally effective as of January 2015 incorporates higher risk weights for certain mortgage-related exposures while leaving others unchanged (see table 1). For example, the risk weights for most single-family residential mortgages are largely unchanged by the final rule. However, the final rule changed the risk weights for some mortgage-related securitization exposures and for mortgage servicing assets. Risk weights for residential first-lien mortgages on one-to-four family properties that are held in banks' portfolios have remained largely unchanged since the adoption of the Basel I-based rules.
Under the standardized approach outlined in the Basel III-based final rule, the portions of mortgages that are conditionally guaranteed by U.S. government agencies, such as the Federal Housing Administration or the Department of Veterans Affairs, are assigned 20 percent risk weights—essentially unchanged since Basel I. Other mortgages—and the portions of mortgages not guaranteed by U.S. government agencies—secured by one-to-four family residential properties are assigned a 50 percent risk weight, provided that such loans are: secured by a property that is either owner-occupied or rented; made in accordance with prudent underwriting standards, including standards relating to the loan amount as a percentage of the appraised value of the property; not 90 days or more past due or carried in nonaccrual status; and not restructured or modified (other than through the Department of the Treasury's Home Affordable Modification Program). Also, if a banking organization holds the first-lien and junior-lien residential mortgage exposures, and no other party holds an intervening lien, the institution must combine the exposures and treat them as a single loan secured by a first lien to determine the loan-to-value ratio and assign a risk weight. Banking organizations are required to assign a 100 percent risk weight to a first-lien residential mortgage exposure that does not meet the criteria previously listed and to junior-lien residential mortgage exposures if the banking organization does not hold the first lien on the property. The advanced internal ratings-based approach requires banking organizations to use a formula defined in regulation to determine the capital requirements for residential mortgage exposures, which are grouped into segments that have similar (homogeneous) risk characteristics. The formula for the capital charge for nondefaulted residential mortgage exposures uses values for the probability of default and loss given default that each bank derives from its internal systems (see app. I). For example, applying a probability of default of 3 percent and losses given default of 20 percent to a segment of nondefaulted residential mortgages would result in a risk weight of about 50 percent using this formula. This formula is unchanged since it went into effect in 2008. Under this advanced approach, defaulted residential mortgage exposures that are covered by an eligible U.S. government guarantee have a capital charge of 1.6 percent for the portion that is covered by the guarantee—an implicit 20 percent risk weight (0.016 ÷ 0.08 = 0.2). The previous rules did not include a separate provision for defaulted residential mortgage exposures covered by a government guarantee. Defaulted residential mortgage exposures not covered by an eligible U.S. government guarantee have a capital charge of 8 percent, which implies a risk weight of 100 percent. The standardized approach and the advanced approach both assign a risk weight of 50 percent to pre-sold construction loans with a legally binding sales contract unless the purchase contract is cancelled, in which case a banking organization must assign a 100 percent risk weight.
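The standardized first-lien assignment described above reduces to a small decision rule. The sketch below summarizes it; the field names are ours, and the actual rule text contains additional conditions and definitions beyond what is shown here.

```python
# Simplified sketch of the standardized risk-weight assignment for
# first-lien residential mortgage exposures described above.
def first_lien_risk_weight(agency_guaranteed_portion: bool,
                           owner_occupied_or_rented: bool,
                           prudently_underwritten: bool,
                           past_due_90_or_nonaccrual: bool,
                           restructured_outside_hamp: bool) -> float:
    if agency_guaranteed_portion:    # portion guaranteed by a U.S. agency
        return 0.20
    if (owner_occupied_or_rented and prudently_underwritten
            and not past_due_90_or_nonaccrual
            and not restructured_outside_hamp):
        return 0.50
    return 1.00                      # all other first-lien exposures

print(first_lien_risk_weight(False, True, True, False, False))  # 0.5
```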
The Basel III-based final rule keeps the risk weights for government- and enterprise-guaranteed MBS outlined under the previous rules but changes how the risk weights for private-label securities are calculated. Under the standardized approach, residential MBS guaranteed by Ginnie Mae have a risk weight of 0 percent, while residential MBS issued and guaranteed by Fannie Mae and Freddie Mac have a risk weight of 20 percent—unchanged since Basel I. As shown previously in table 1, risk weights for other MBS that qualify as securitization exposures can range from 20 percent to 1,250 percent under either the standardized or the advanced approaches. Previously, risk weights for securitization exposures could be as low as 7 percent under the advanced approach. Under the current rules, banks using the advanced internal ratings-based approach must also apply the standardized approach. If banks are unable to use the formulas and approaches defined in the final rule for the standardized and advanced approaches—for example, because they do not have data to calculate all the inputs—they must apply the 1,250 percent risk weight to their securitization exposures. The Basel III final rule establishes two methods for calculating risk weights for securitization exposures under the standardized approach. The simplified supervisory formula approach relies on objective inputs to calculate risk weights for securitization exposures using a formula. To use this approach, a bank needs to know the performance of the underlying assets. The "gross-up" approach involves calculating an amount of capital for the bank's exposure as well as for the portion of more senior exposures (that is, the least risky tranches, which are given priority for repayment), if any, for which the bank's exposure provides support. If a bank does not have access to the inputs required to calculate the simplified supervisory formula approach, or prefers a simpler approach, the gross-up approach can be applied. Banks that are subject to another rule, the market risk rule, must use the simplified supervisory formula approach. Banks that are not subject to this other rule may choose to use either method but must use the same method across all exposures. The simplified supervisory formula approach takes into account the weighted average capital charge for the underlying exposures, the delinquency level of the underlying collateral, and the relative size and seniority of the security in the securitization structure (see app. II for the details of the formula). The current balances of all the underlying exposures in the securitization structure are used to calculate the attachment and detachment points for each level, or tranche, of the structure. Losses are first borne by the lowest tranches. Once a tranche experiences a total loss, the next tranche immediately senior to it begins to bear any additional losses. The following hypothetical securitization structure illustrates the simplified supervisory formula approach for a securitization backed by a pool of residential mortgages that would be risk-weighted at 50 percent. For such a securitization, the typical capital charge of the underlying mortgage pool is 4 percent (50 percent risk weight multiplied by 8 percent minimum risk-based capital ratio). Assuming that 5 percent of the mortgages in the pool are delinquent and have a risk weight of 100 percent (for an 8 percent capital charge), the weighted average capital charge of the mortgage pool would be 4.2 percent. The risk weights for each of the tranches can be calculated using the formula outlined in the final rule and described in appendix II. The risk-weight results for a pool that does not involve resecuritizations are shown in table 2.
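The pool-level input to that calculation, the weighted-average capital charge of the underlying exposures, can be computed directly, as in the sketch below. The full tranche-level formula is in appendix II; only the pool-level input is sketched here, and the names are ours.

```python
# Sketch of the weighted-average capital charge of an underlying mortgage
# pool, the pool-level input to the simplified supervisory formula approach.
MIN_RATIO = 0.08  # 8 percent minimum risk-based capital ratio

def pool_charge(shares_and_weights):
    """shares_and_weights: list of (share of pool, risk weight) pairs."""
    return sum(share * rw for share, rw in shares_and_weights) * MIN_RATIO

# 95% of the pool performing at a 50% risk weight, 5% delinquent at 100%.
print(pool_charge([(0.95, 0.50), (0.05, 1.00)]))  # 0.042, i.e., 4.2 percent
# With 10% delinquent, as in the scenario discussed next: 4.4 percent.
print(pool_charge([(0.90, 0.50), (0.10, 1.00)]))  # 0.044
```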
If 10 percent of the underlying mortgage pool is delinquent, the weighted average capital charge would be 4.4 percent, and the tranche risk weights for the lower (more risky) tranches would increase, reflecting the increased likelihood of losses on these tranches. In both scenarios, the most senior tranches would have risk weights of 20 percent. Risk weights for other securitization exposures, such as real estate mortgage investment conduits, can also be calculated using the simplified supervisory formula approach. Similarly, the credit risk transfer transactions the enterprises have engaged in with various investors would be treated as securitization exposures for banks that hold these notes. In most cases, these notes would have risk weights at or near 1,250 percent because the holders of the notes are in or near the first-loss position. Previously, under the Basel I-based rules that most banks were subject to through 2014, some private-label securitization exposures were assigned general risk weights while others were given risk weights based on credit ratings. For example, privately issued MBS backed by mortgages that would qualify for the 50 percent risk weight would receive a 50 percent risk weight, subject to certain conditions. Mortgage securitization exposures—including direct credit substitutes, recourse, and residual interests—that were externally rated were assigned risk weights between 20 percent (for long-term credit ratings of AAA or AA) and 200 percent (for BB ratings, which indicate higher risk). The gross-up approach in the Basel III-based rule is the same as an approach that was available under the previous rules. To calculate risk-weighted assets under the gross-up approach, a banking organization determines three inputs along with the exposure amount: the pro rata share, the enhanced amount, and the applicable risk weight. The pro rata share is the par value of the banking organization's exposure as a percentage of the par value of the tranche in which the securitization exposure resides—for example, a $5,000 exposure in a $10,000 junior tranche of one-to-four family residential mortgages would be a 50 percent pro rata share. The enhanced amount is the par value of all tranches that are more senior to the tranche in which the exposure resides. These more senior tranches are "enhanced"—that is, their credit profiles are improved—by the subordinated tranches. If the total securitization in the previous example is $100,000, the par value of all tranches that are more senior to the tranche in which the bank has an interest is $90,000. The applicable risk weight is the weighted-average risk weight of the underlying exposures in the securitization as calculated under the standardized approach. For mortgages not guaranteed by the federal government (as in the previous example), the underlying exposures can have risk weights of either 50 percent or 100 percent. The weighted-average risk weight would be 75 percent if half of the total amount of underlying mortgages had a risk weight of 50 percent and the remaining underlying exposures had a risk weight of 100 percent. For the previous example, assume the weighted-average risk weight of the underlying exposures is 50 percent (that is, all the mortgages meet all of the requirements discussed previously, such as not 90 days past due and not restructured or modified). The risk weight would then be applied to the bank's interest—$5,000—plus the pro rata share of the more senior tranches—50 percent of $90,000. In other words, $50,000 ($5,000 plus $45,000) would be multiplied by the 50 percent risk weight. The result, $25,000, is equivalent to a 500 percent risk weight on the bank's $5,000 exposure. The bank would then multiply the $25,000 by the minimum capital requirement of 8 percent to determine that the capital requirement for the $5,000 exposure is $2,000.
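The gross-up arithmetic above can be summarized in a short sketch; the function and variable names are ours, and the figures match the worked example in the text.

```python
# Sketch of the gross-up approach described above.
MIN_RATIO = 0.08                                  # 8% minimum capital ratio

def gross_up(exposure: float, tranche_par: float,
             senior_par: float, applicable_rw: float):
    pro_rata_share = exposure / tranche_par       # bank's share of tranche
    grossed_up = exposure + pro_rata_share * senior_par
    rwa = grossed_up * applicable_rw
    return rwa, rwa * MIN_RATIO, rwa / exposure   # RWA, capital, eff. weight

# $5,000 exposure in a $10,000 junior tranche; $90,000 of senior tranches;
# 50 percent weighted-average risk weight on the underlying mortgages.
print(gross_up(5_000, 10_000, 90_000, 0.50))      # (25000.0, 2000.0, 5.0)
```

The effective 500 percent risk weight (5.0 above) shows how a junior position in a securitization can require far more capital per dollar of exposure than the underlying whole loans.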
For the advanced internal ratings-based approach, the Basel III-based final rule eliminates the credit ratings-based approach that had been the primary method for determining risk weights for securitization exposures under the previous rules. The credit ratings-based approach assigned risk weights to securitization exposures based on their seniority in the securitization structure and the effective number of exposures (as determined by a formula), as well as their external ratings from one or more NRSROs or inferred ratings. These risk weights ranged from 7–20 percent for the highest investment grade (long-term) securitization exposures (e.g., AAA) to 35–100 percent for the lowest investment grade exposures (e.g., BBB+, BBB, BBB-) to 250–650 percent for exposures one category below investment grade (e.g., BB+, BB, BB-). The primary method for banks using the advanced internal ratings-based approach to calculate capital requirements for securitization exposures is the supervisory formula approach. The supervisory formula approach was an alternative method banks could use under the Basel II-based rules for securitization exposures for which there was no applicable external or inferred credit rating. It has remained largely unchanged from the Basel II-based rules with one significant exception: a 20 percent risk-weight floor, rather than a 7 percent risk-weight floor, now applies to all securitization exposures. The supervisory formula approach requires banks to calculate several input parameters on an ongoing basis. These include the exposure's credit enhancement level and thickness; the exposure-weighted average loss given default for the underlying exposures to the securitization transaction and the effective number of underlying exposures (both determined by formulas specified in the final rule); and the capital requirements for the underlying exposures, such as those for residential mortgage exposures described earlier. If banks using the advanced internal ratings-based approach are unable to calculate the inputs for the supervisory formula approach, they may use the simplified supervisory formula approach described earlier. The Basel III-based final rule changes the treatment of mortgage servicing assets by (1) lowering the cap on the amount of mortgage servicing assets that can be included in capital calculations, which reflects, in part, the uncertainty regarding the ability of banking institutions to realize value from these assets, especially under adverse financial conditions, and (2) increasing the risk weights. Mortgage servicing assets are the contractual rights owned by a banking organization to service (for a fee) mortgage loans that are owned by others. Under previous rules, banks included up to 90 percent of fair value or 100 percent of book value of mortgage servicing assets in their capital calculations, whichever was lower, and mortgage servicing assets were subject to a 100 percent risk weight. In contrast, the final rule caps the recognition of mortgage servicing assets at 10 percent of the common equity component of tier 1 capital.
Mortgage servicing assets exceeding the 10 percent threshold must be deducted from common equity. Deductions from common equity reduce the numerator in banks' calculations of their risk-based capital ratios. Mortgage servicing asset amounts that are not deducted from common equity are currently subject to a 100 percent risk weight, and that risk weight will increase to 250 percent beginning January 1, 2018, when the phase-in period ends. The future increase in the risk weight will increase risk-weighted assets, the denominator for banks' risk-based capital ratios. As a result, both the stricter cap and higher risk weight on mortgage servicing assets will reduce banks' risk-based capital ratios, making the required minimum level more difficult to maintain. The final rule does not apply to the enterprises or nonbank financial institutions. The enterprises' previous regulator, the Office of Federal Housing Enterprise Oversight, established risk-based capital requirements that were defined by stress test scenarios rather than fixed risk weights. A simulated stress test was used to project the enterprises' financial performance over a 10-year period and measure capital adequacy, or the amount of capital required to survive a prolonged period of economic stress without new business or active risk management action. When the enterprises were placed in conservatorship in 2008, their new regulator, the Federal Housing Finance Agency, suspended the enterprises' capital requirements. Nonbank financial institutions such as nonbank mortgage servicers follow capital requirements established by Ginnie Mae and the enterprises in order to service loans these entities guaranteed, but these requirements do not involve risk weights. The minimum capital requirements that are currently in place or that have been proposed are shown in table 3. The full impact of the changes to capital requirements for holdings of mortgage-related assets remains uncertain because insufficient time has passed since these changes took effect for both banks and nonbank mortgage servicers, and for some assets the changes have not yet been fully phased in. However, our past work suggested that—based on analysis of data included in banks' Consolidated Reports of Condition and Income (commonly referred to as Call Reports) and Credit Union 5300 Call Reports—many lenders generally appeared to be participating in residential mortgage lending much as they had in the past. In addition, data on mortgage debt outstanding published by the Federal Reserve indicate that holdings of mortgage debt for one-to-four family properties have remained consistent with trends that predate the 2014–2015 changes in risk weights (see fig. 1): (1) mortgage debt held or guaranteed by the enterprises holding steady, (2) mortgage debt held by depository institutions also holding steady, (3) mortgage debt held or guaranteed by Ginnie Mae continuing to increase slightly, and (4) mortgage debt backing private-label securities continuing its steady decline. But increased risk weights for some mortgage-related assets, among other factors, can have potential implications for banks' decisions about securitizing and servicing mortgages and investing in MBS. Securitizing mortgages creates exposures that may carry increased risk weights for banks. These exposures can include mortgage servicing assets, recourse obligations, and residual interests.
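The mortgage servicing asset treatment described above (the 10 percent cap, the deduction from common equity, and the 250 percent risk weight once the phase-in ends) can be sketched as follows. The bank's capital and servicing asset figures are hypothetical.

```python
# Sketch of the MSA treatment under the final rule: amounts above 10% of
# common equity tier 1 (CET1) are deducted from capital; the remainder is
# risk-weighted (100% during phase-in, 250% beginning January 1, 2018).
def msa_capital_effects(msa: float, cet1: float, risk_weight: float = 2.50):
    cap = 0.10 * cet1
    deduction = max(0.0, msa - cap)        # reduces the capital numerator
    rwa = min(msa, cap) * risk_weight      # adds to the RWA denominator
    return deduction, rwa

# Hypothetical bank: $120 of MSAs against $1,000 of CET1 capital.
print(msa_capital_effects(120.0, 1000.0))  # ($20 deducted, $250 of RWA)
```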
As discussed previously, the cap on the amount of mortgage servicing assets that can be included in capital calculations is now lower than it previously was—10 percent of common equity rather than either 90 percent of fair value or 100 percent of book value. Banks that exceed the cap may need to hold more capital for these assets. In addition, mortgage servicing assets will have a higher risk weight under the final rule beginning in 2018, which will also increase the capital required for these assets. Fewer banks may want to retain these servicing rights and instead may seek to sell them to nonbank financial institutions that are not subject to the final rule. The total amount of mortgage servicing assets held by banks peaked in 2009 and generally has been decreasing in subsequent years, according to data compiled by SNL Financial. In 2016 reports on mortgage servicing and nonbank servicers, we found that the share of U.S. residential mortgages serviced by nonbank servicers increased from approximately 6.8 percent in the first quarter of 2012 to approximately 24.2 percent in the second quarter of 2015, while the share serviced by the largest nationwide, regional, and other banks decreased from about 75.4 percent to about 58.6 percent over the same period. Banking organizations may face recourse obligations (for example, to repurchase mortgages they have sold to others) when loans default within a certain period after the sale. Banks reported a decreasing amount of residential mortgages serviced for others with recourse, according to data compiled by SNL Financial, but this trend predates the Basel III-based final rule. Under current credit risk retention rules, banks must retain an interest equal to at least 5 percent of the credit risk for mortgage-backed securities they sponsor unless the security qualifies for an applicable exemption, including being exclusively backed by mortgages that meet the definition of a “qualified mortgage.” These residual interests are included in risk-weighted assets and may carry higher risk weights under the Basel III-based rules than under the previous rules, depending on how they are structured. Residual interests that are structured as a vertical slice of the securitization structure (meaning the sponsor retains a 5 percent interest in each tranche) would have a lower risk weight than residual interests that are structured as a horizontal slice of the securitization (meaning the sponsor retains the most junior 5 percent interest in the securitization structure). Few transactions involving the securitization of mortgages that do not meet the qualified mortgage definition have been completed since the rule went into effect. Instead, banks may be electing to hold these mortgages in their portfolios or to originate loans that meet the qualified mortgage definition, including those that can be sold to the enterprises. The increase in banks’ holdings of securities backed by the enterprises could be evidence of this latter scenario, as banks often deliver loans to the enterprises in exchange for enterprise MBS. Holdings of enterprise MBS would have lower risk weights than holding whole loans in portfolio (whether or not they meet the qualified mortgage definition). Finally, the Basel III-based final rule largely left unchanged the historically lower risk weights of MBS guaranteed by the enterprises vis-à-vis other mortgage-related assets, which can influence the demand for these securities relative to whole loans and privately issued MBS. 
While holdings of MBS guaranteed by Ginnie Mae retain a 0 percent risk weight and holdings of MBS guaranteed by the enterprises retain a 20 percent risk weight under the standardized approach (and their treatment under the advanced approach has not changed), holdings of privately issued MBS may face higher risk weights than under the prior rules. For privately issued MBS that have an external credit rating, the minimum risk weight increased from 7 percent to 20 percent for banks subject to the advanced approach, and the junior tranches of these MBS are likely subject to higher risk weights in part because credit ratings can no longer be used to calculate risk weights. According to data compiled by SNL Financial, banks' holdings of MBS guaranteed by Ginnie Mae and the enterprises have been increasing while their holdings of other residential MBS have been decreasing, but these trends predate the changes to risk weights for privately issued MBS. We sought and received technical comments on a draft of this report from the Board of Governors of the Federal Reserve System, Office of the Comptroller of the Currency, and Federal Deposit Insurance Corporation and incorporated their comments and feedback into the final report. We are sending copies of this report to the appropriate congressional committees and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8678 or evansl@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. Under the advanced internal ratings-based approach, residential mortgages are included in retail exposures. Generally, a banking organization must calculate retail risk-weighted asset amounts in four distinct phases:

1. Phase 1 – Categorization of exposures. During this phase, the banking organization determines which of its exposures are wholesale exposures, retail exposures, securitization exposures, or equity exposures. Retail exposures are further categorized as residential mortgage exposures, qualifying revolving exposures, or other retail exposures.

2. Phase 2 – Segmentation of retail exposures. During the second phase, a banking organization must group the retail exposures in each retail subcategory into segments that have homogeneous risk characteristics. A banking organization must also segment defaulted retail exposures separately from nondefaulted retail exposures.

3. Phase 3 – Assignment of risk parameters to segments of retail exposures. During phase 3, the banking organization must associate a probability of default (PD), a loss given default (LGD), and an exposure at default (EAD) with each segment of retail exposures, such as residential mortgages.

4. Phase 4 – Calculation of risk-weighted assets. In the final phase, the banking organization uses these risk parameters to calculate the capital charge, K, for each segment of nondefaulted exposures:

K = LGD × N[(N⁻¹(PD) + √R × N⁻¹(0.999)) ÷ √(1 − R)] − (LGD × PD)

where K represents the capital charge; N(.) means the cumulative distribution function for a standard normal random variable; N⁻¹(.) means the inverse cumulative distribution function for a standard normal random variable; and R is the nondefaulted exposures correlation factor. For residential mortgage exposures, R is set equal to 0.15 in the Basel III-based final rule.
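This formula is straightforward to compute. The minimal sketch below, assuming SciPy for the standard normal distribution functions, reproduces the worked example that follows; the function and variable names are ours, not the rule's.

```python
# Sketch of the retail capital charge formula above, using SciPy's
# standard normal CDF (norm.cdf) and inverse CDF (norm.ppf).
from math import sqrt
from scipy.stats import norm

def capital_charge(pd: float, lgd: float, r: float = 0.15) -> float:
    """K for a segment of nondefaulted retail exposures (R = 0.15 for
    residential mortgages)."""
    x = (norm.ppf(pd) + sqrt(r) * norm.ppf(0.999)) / sqrt(1.0 - r)
    return lgd * norm.cdf(x) - lgd * pd

k = capital_charge(pd=0.03, lgd=0.20)   # ~0.0398, about 4 percent
print(f"K = {k:.4f}; implied risk weight = {k / 0.08:.0%}")  # ~50%
```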
Applying the formula to a segment of nondefaulted residential mortgage exposures with a PD of 3 percent (0.03), an LGD of 20 percent (0.2), and R = 0.15 gives:

K = 0.2 × N[(N⁻¹(0.03) + √0.15 × N⁻¹(0.999)) ÷ √0.85] − (0.2 × 0.03)

With N⁻¹(0.03) = −1.88079 and N⁻¹(0.999) = 3.09023:

K = 0.2 × N[(−1.88079 + 0.3873 × 3.09023) ÷ 0.92195] − 0.006
K = 0.2 × N(−0.74185) − 0.006
N(−0.74185) = 0.22909
K = 0.2 × 0.22909 − 0.006
K = 0.03982, or approximately 4 percent

The capital charge, K, reflects the risk weight multiplied by the minimum capital requirement. Therefore, dividing the capital charge of approximately 4 percent by the minimum capital requirement of 8 percent produces a risk weight of about 50 percent.

In addition to the contact named above, Karen Tremba (Assistant Director), Don Brown (Analyst-in-Charge), M'Baye Diagne, Rachel DeMarcus, Marc Molino, Jennifer Schwartz, and Tyler Spunaugle made significant contributions to this report.
During the 2007–2009 financial crisis, many banking organizations lacked capital of sufficient quality and quantity to absorb substantial losses on mortgages and mortgage-related assets, revealing these assets to be riskier than previously thought. In response to the crisis, banking regulators around the world moved to strengthen requirements for capital adequacy. In the United States, the Dodd-Frank Wall Street Reform and Consumer Protection Act introduced, among other things, new capital requirements for bank holding companies and savings and loan holding companies. Internationally, in December 2010 the Basel Committee on Banking Supervision (which had issued the Basel I and Basel II frameworks) issued the Basel III framework—a comprehensive set of reforms to strengthen global capital and liquidity standards—with the goal of promoting a more resilient banking sector. Under this framework, banks apply risk weights to different assets to determine the amount of capital they need to meet regulatory requirements. GAO was asked to explain how capital requirements for a mortgage depend upon how it is financed and how the requirements have changed since the crisis. This report examines the risk weights for residential mortgages and certain other mortgage-related assets under the U.S. Basel III-based rule and how they compare to those in effect under prior capital regimes and for nonbank entities. GAO examined information on capital requirements from current and past rules. GAO received technical comments from the banking regulators, which were incorporated as appropriate. Rules for capital adequacy require banks to hold a percentage of their assets as capital to act as a financial cushion to absorb unexpected losses. Under current rules, banks must hold capital equal to at least 8 percent of risk-weighted assets. Since the early 1990s, U.S. federal banking regulators have used a risk-weighting system under which banks multiply asset amounts by factors, known as risk weights, to calculate risk-weighted assets. Different types of assets have different risk weights that attempt to capture the assets' relative risk. The Basel III-based final rule adopted in 2013 by the U.S. federal banking regulators incorporates higher risk weights for certain mortgage-related assets while leaving others unchanged from prior capital regimes (Basel I and Basel II). Most banks use the standardized approach for calculating risk-weighted assets, but large internationally active banks use an advanced approach that relies on formulas established by the regulators and inputs from their internal systems. Under the standardized approach, the risk weights for single-family residential mortgages are largely unchanged by the final rule. Similarly, the risk weights under this approach for residential mortgage-backed securities (MBS) guaranteed by Ginnie Mae, Fannie Mae, and Freddie Mac have not changed since Basel I. Under the advanced approach, large internationally active banks use a formula defined in regulation to determine the capital requirements for residential mortgage exposures, which include whole loans as well as MBS guaranteed by Ginnie Mae, Fannie Mae, and Freddie Mac. This formula has not changed since it went into effect in 2008 under the Basel II-based rule. For both approaches, the ways for determining risk weights for securitization exposures and mortgage servicing assets have changed under the final rule, which may increase these risk weights. 
As required by the Dodd-Frank Wall Street Reform and Consumer Protection Act, the final rule eliminates the use of credit ratings for determining risk weights for securitization exposures, instead relying on regulator-established formulas. Also, the final rule reduces the cap on mortgage servicing assets that can be included in capital calculations and will raise the risk weight from 100 percent to 250 percent. The Basel III-based final rule largely left in place the historically lower risk weights of MBS guaranteed by Fannie Mae and Freddie Mac vis-à-vis other mortgage-related assets, which can influence the demand for these securities relative to whole loans and privately issued MBS. However, the full impact of changes in risk weights for holdings of mortgage-related assets remains uncertain because insufficient time has passed since these changes took effect, and for some assets the changes have not yet been fully phased in. GAO's recent work suggested that many lenders generally appeared to be participating in residential mortgage lending much as they had before capital requirements changed. Also, data on mortgage debt outstanding and on banks' holdings of different assets indicate that trends in holdings of mortgage debt and mortgage-related assets that predate the changes in risk weights have continued. But increased risk weights for some mortgage-related assets may lead to changes in banks' decisions about securitizing and servicing mortgages.
Air Force travel card delinquency rates and amounts charged off were substantially lower than those of non-Air Force DOD components, and Air Force delinquency rates were about 1 percentage point higher than those of non-DOD federal civilian agencies. Cumulative Air Force charge-offs since the inception of the travel card program with Bank of America in November 1998 are approximately $11.6 million, the lowest of the three services. Our analysis of available data showed that the travel cardholder's rank and pay rate are strong predictors of delinquency problems. We found that the Air Force's delinquency and charge-off problems are primarily associated with low- and mid-level enlisted military employees. As discussed in following sections of this report, improvements in the Air Force's overall control environment improved Air Force delinquency rates, but DOD's overall high delinquency and default rates resulted in contentious relations with Bank of America. The bank threatened to end its participation in the program but eventually agreed to contract modifications that included increased fees. Past delinquencies and charge-offs have cost the Air Force, the federal government, and the taxpayers hundreds of thousands of dollars in lost rebates and substantial resources spent pursuing and collecting on past due accounts. We also estimate that contract modifications will cost the Air Force millions of dollars in the future due to higher fees. The Air Force has taken a number of positive actions to address its delinquency and charge-off rates, and data for the first half of fiscal year 2002 show a significant drop in charged-off accounts. This reduction is attributable, in part, to a salary and military retirement offset program—similar to garnishment—which was initiated in November 2001. Other Air Force actions included encouraging the use of the split disbursement payment process, in which the Defense Finance and Accounting Service (DFAS) sends a portion of the traveler's reimbursement directly to the bank rather than to the cardholder, and increased management attention and focus on the delinquency issue. However, except for split disbursements, Air Force actions primarily address the symptoms or back-end result of delinquency and charge-offs after they have already occurred. As noted in the following sections of this report, additional emphasis on front-end management of the travel card program, such as more selective procedures for issuing the cards and overseeing their proper use, could further improve the Air Force travel card program. As of March 31, 2002, approximately 8,000 Air Force cardholders had over $5 million in delinquent debt. Over the last 2 years, Air Force delinquency rates fluctuated from 5 to 11 percent and on average were about 5 percentage points lower than the Army's and the Navy's and 1 percentage point higher than non-DOD federal civilian agencies'. The Air Force has set a goal of no more than a 4 percent delinquency rate. As discussed later, greater emphasis on commander responsibility and accountability contributed, at least in part, to lower Air Force delinquency rates. Figure 1 compares delinquency rates among the Air Force, non-Air Force DOD components, and the 23 largest civilian agencies. In addition, as shown in figure 2, Air Force travel card delinquency rates for the eight quarters ending March 31, 2002, were significantly lower than Army and Navy travel card delinquency rates.
Further analysis revealed that Air Force travel card delinquency rates decreased from 16.9 percent as of December 31, 1999, to 6.0 percent as of March 31, 2002. Table 1 shows the decrease in Air Force delinquency rates since December 1999, as well as the cyclical nature of Air Force travel card delinquency rates. Since the inception of the travel charge card task order between DOD and Bank of America on November 30, 1998, Bank of America has charged off about 9,000 Air Force travel card accounts with nearly $11.6 million of bad debt. While not an excellent track record, this is lower than the Army's approximately 23,000 charged-off accounts valued at nearly $34 million and the Navy's approximately 13,800 charged-off accounts valued at nearly $16.6 million. Task order modifications during fiscal year 2001 allowed Bank of America to institute a salary offset provision against DOD military personnel whose travel card accounts were previously charged off or were more than 120 days past due. Table 2 provides a comparison of cumulative charge-offs and delinquencies by military service as of March 31, 2002. Our analysis showed a correlation between certain demographic factors and high delinquency and charge-off rates. Available data showed that the travel cardholder's rank or grade (and associated pay) is a strong predictor of delinquency problems. As shown in figure 3, Air Force delinquency and charge-off problems are primarily associated with low- and mid-level enlisted military personnel in grades E-1 (airman) to E-6 (technical sergeant), who have relatively low incomes and little experience in handling personal finances. Appendix IV presents information on military and civilian grades and pay rates. Available data indicate that military personnel in grades E-1 to E-6 account for about 69 percent of all Air Force military personnel. These enlisted military personnel have basic pay levels ranging from $11,500 to $27,600. These individuals were responsible for 41 percent of the total outstanding Air Force travel card balances as of September 30, 2001. Figure 4 compares the delinquency rates for military grades and civilian personnel to the average Air Force delinquency rate as of September 30, 2001. As shown, the delinquency rates were as high as 15.7 percent for grades E-1 to E-3 and 9.9 percent for grades E-4 to E-6, compared to the Air Force overall delinquency rate of 6.2 percent. These rates were markedly higher than the rate for officers, which was 2.4 percent. They were also substantially higher than that of Air Force civilians, which at 3.6 percent was 1.4 percentage points lower than the federal civilian agencies' rate shown in figure 1. The delinquency rate for military personnel in grades E-4 to E-6, the Air Force's senior airmen to technical sergeants, had a particularly important negative impact on the Air Force's delinquency rate. Pay levels for these personnel, excluding supplements such as housing, range from $18,600 to $27,600. As shown by Bank of America data, personnel in grades E-4 to E-6 accounted for 37 percent of the total Air Force outstanding balance. High delinquency rates for the E-1 through E-6 grades, combined with their extensive use of the travel card, have a significant impact on the Air Force-wide delinquency rate. Figure 5 shows Air Force fiscal year 2001 charge-offs. Charge-off amounts of about $2.6 million for military personnel in grades E-1 through E-6 accounted for 79 percent of the $3.3 million in total Air Force charge-offs in fiscal year 2001.
An Air Force travel card program official told us that a major factor in the service's travel card delinquencies is first-term enlisted personnel. An Air Force member can normally attain the E-4 grade within 3-1/2 years of beginning his or her first term. According to Air Force data, over half of the personnel in grades E-1 to E-6 are in grades E-4 and below. The official commented that if members are not committed to an Air Force career and plan to serve only one tour, the temptation exists to misuse the card before they separate from the Air Force. In addition, as discussed below, the Air Force did not exempt personnel with poor credit histories from required use of travel cards. Consequently, these low- and mid-level enlisted military personnel are often issued travel cards even though they may already be in serious financial trouble and, therefore, may not be appropriate credit risks. As shown in table 3, five Air Force major commands accounted for about 63 percent of the Air Force travel card delinquencies as of March 31, 2002. Air Force National Guard and Air Force Reserve Command officials attributed their high delinquent balances to the recent activation of guard and reserve forces, the associated increase in travel card use, and inadequate employee training on travel voucher preparation. In addition, the officials explained that National Guard and Reserve personnel who report to duty intermittently may not become aware of problems with travel voucher accuracy and late submission of payment vouchers until they report for their next duty assignment—several days to a month after a problem has occurred. Further, the officials told us that many of their members have not been trained in proper travel voucher preparation procedures, and controls over travel card use and payment of travel card bills are weak. One reserve official cited the lack of specific guidance for disciplinary action in DOD's Financial Management Regulation as a contributing factor. According to Air Force officials, the Air Combat Command, Air Force Materiel Command, and Air Mobility Command have all experienced significant increases in travel and deployments since September 11, 2001. Our audit work showed instances in which extended travel and back-to-back deployments resulted in delays in travel voucher preparation and submission. To reduce delinquencies associated with late payment of travel card bills by deployed units, the Air Force has emphasized the use of the split disbursement payment process and interim travel vouchers. Delinquencies and charge-offs within DOD have resulted in increased costs to the Air Force and the other services. In fiscal year 2001, DOD entered into an agreement with Bank of America to adjust the terms of its travel card contract. DOD agreed to increased fees and a change in the rebate calculation. These changes cost the Air Force about $350,000 in lost rebates on individually billed and centrally billed accounts in fiscal year 2001 and could cost an estimated $1.6 million in increased ATM fees annually. Other costs are real but not easily measurable, such as the increased administrative burden to the Air Force of identifying and addressing delinquent accounts. Unexpectedly high defaults by DOD travel cardholders resulted in a 5-month legal dispute with Bank of America over the continuation of the travel card contract.
In 1998, under the provisions of the General Services Administration's (GSA) master contract with Bank of America, DOD entered into a tailored task order with Bank of America to provide travel card services for a period of 2 years, ending November 29, 2000. Under the terms of the task order, DOD had three 1-year options to unilaterally renew the contract. On September 29, 2000, prior to the expiration of the initial task order, DOD gave notice to Bank of America that it intended to exercise its option to extend the task order for an additional year. In November 2000, Bank of America contested the provisions of the DOD task order with the GSA contracting officer. Bank of America claimed that the task order was unprofitable due to required "contract and program management policies and procedures" associated with higher-than-anticipated credit losses, including an estimate that 43,000 DOD employees had defaulted on more than $59 million in debts. Consequently, in April 2001, the master contract and the related DOD tailored task order for travel card services were renegotiated. Specifically, Bank of America was able to increase its revenue by instituting additional fees, such as higher cash advance and late payment fees; offsetting credit losses against rebates, as explained later; facilitating the collection of delinquent and charged-off amounts through salary and military retirement pay offset; and encouraging DOD personnel participation in split disbursements, in which the government sends part or all of the travel voucher reimbursement directly to Bank of America. One of the terms of the renegotiated task order was that, effective August 10, 2001, the travel card cash advance fee would increase from 1.9 percent to 3 percent, with a minimum fee of $2. The Air Force reimburses all cash advance fees related to authorized cash withdrawals. We estimate that this contract modification will result in approximately $1.6 million of increased costs to the Air Force each year. Our estimate was made by applying the new fee structure that went into effect in mid-August 2001 to cash advances made during fiscal year 2001. Other fee increases agreed to in the renegotiation, such as the fee for expedited travel card issuance, will also result in additional costs to the Air Force.
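The mechanics of that estimate can be illustrated with a short sketch that applies the old and new fee schedules to a handful of cash advances. The advance amounts below are invented for illustration (GAO's estimate used actual fiscal year 2001 cash advance data), and because the text does not say whether the old schedule carried a minimum fee, none is assumed.

```python
# Illustrative sketch of the cash advance fee comparison described above.
def old_fee(amount: float) -> float:
    return amount * 0.019                # 1.9 percent (no minimum assumed)

def new_fee(amount: float) -> float:
    return max(amount * 0.03, 2.00)      # 3 percent, $2 minimum

advances = [300.00, 80.00, 150.00]       # hypothetical withdrawals
added_cost = sum(new_fee(a) - old_fee(a) for a in advances)
print(f"Added cost on these advances: ${added_cost:.2f}")
```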
The GSA master contract modification also changed the rebate calculation, making it imperative that the Air Force (and the other services) improve their payment rates to receive the full benefits of the program. Under the GSA master contract, credit card companies are required to pay a quarterly rebate, also known as a refund, to agencies and GSA based on the amount charged to both individually billed and centrally billed cards. The rebate to the agency is reduced, or eliminated, if significant numbers of an agency's individual cardholders do not pay their accounts timely. Specifically, credit losses, or balances that reach 180 calendar days past due, reduce the rebate amounts. Effective January 2001, the contract modification changed the way that rebates are calculated and how credit losses are handled. If the credit loss of an agency's individually billed travel card accounts exceeds 30 basis points—or 30 one-hundredths of a percent (.003)—of net sales on the card, the agency is assessed a credit loss fee, or rebate offset, against the rebate associated with both individually billed and centrally billed travel card accounts. This credit loss fee, which resulted solely from individually billed account losses, significantly reduced the rebate the Air Force received on combined individually and centrally billed net sales in fiscal year 2001. In fiscal year 2001, the Air Force collected about $1.4 million of the estimated $1.8 million in rebates it would have received on fiscal year 2001 dollar volume had individually billed accounts been paid timely. Other costs, such as the administrative burden of monitoring delinquent accounts, are harder to measure but no less real. For example, employees with delinquent accounts must be identified, counseled, and disciplined, and their account activity closely monitored. In addition, employees with financial problems who have access to sensitive data may pose a security risk, as discussed later in this report. In addition to having the lowest net charge-off amount of the three services, $6.9 million, the quarterly dollar amount of Air Force accounts charged off has decreased substantially. As shown in figure 6, at the start of fiscal year 2001, the charged-off balance greatly exceeded the recovery amount. Starting in the third quarter of fiscal year 2001, the amount charged off began to decline, so that in the first quarter of fiscal year 2002, recoveries exceeded charge-offs for the first time. Recoveries also exceeded charge-offs in the second quarter of fiscal year 2002. The salary and military retirement offset program has contributed to the reduction in Air Force travel card charge-offs, primarily because past due balances are now transferred to the offset program rather than charged off. Starting in fiscal year 2002, DOD began to offset the retirement benefits of military retirees and the salaries of certain civilian and military employees against the delinquent and charged-off balances on travel card accounts. The DOD salary offset program implements a provision of the Travel and Transportation Reform Act of 1998 (TTRA) that allows any federal agency, upon written request from the travel card contractor, to collect by deduction from the amount of pay owed to an employee (or military member) any amount the employee or member owes on his or her travel card as a result of delinquencies not disputed by the employee. The salary and military retirement offset program was implemented DOD-wide as part of the task order modification. Between April and August 2001, DOD and Bank of America worked together to establish program protocols. Starting in August 2001, Bank of America sent demand letters to cardholders whose accounts were more than 90 days delinquent. The Defense Finance and Accounting Service processed the initial offsets of delinquent accounts in October 2001 in the various DOD pay systems. The first deductions were made from the November pay period and paid to Bank of America starting in December 2001. Figure 6 illustrates the initial impact salary offset had in the first quarter of fiscal year 2002. Bank of America can also use the offset program to recover amounts that were previously charged off. January 2002 was the first month in which Bank of America requested offsets for such accounts. The effect, shown in figure 6, was recoveries amounting to more than three times charge-offs for the second quarter of fiscal year 2002. The offset program works as follows.
When an account is 90 days delinquent, Bank of America may send a demand letter to the individual cardholder requesting payment in full within 30 days. The demand letter specifies that salary offsets will be initiated if payment is not made in full within 30 days. The cardholder may negotiate an installment agreement or dispute the charges with the bank. The cardholder has a right to review all records, such as invoices, and to request a hearing if the bank's disposition of the dispute is not satisfactory. After the 30 days have elapsed, if payment is not made and the cardholder does not dispute the debt, the bank includes the account in the list of accounts that it sends to DFAS requesting offsets. Individuals in the following categories may not be accepted for offset.

• Civilian employees in bargaining units that have not agreed to the salary offset program do not qualify for the program. According to a DFAS official, 1,002 of 1,227 DOD bargaining units had agreed to participate in the program as of July 2002.

• Individuals with debts to the federal government or other garnishments already being offset at 15 percent of disposable pay are considered to be in protected status and are not eligible for the offset program.

• Individuals who cannot be located in the various payroll and military retirement (active, reserve, retired military, or civilian) systems cannot be accepted for offset.

• Civilian retirees are not eligible. The authorizing statutes for both the Civil Service Retirement System and the Federal Employees' Retirement System in effect at the time of our audit specified that retirement benefits may be offset only to the extent expressly authorized by federal statute. Section 2 of TTRA provides authority to offset the salaries of "employees" of agencies but provides no such authority for civilian retiree annuitants.

Once an individual is accepted for offset, the related debt is established in the appropriate pay system, and DFAS can deduct up to 15 percent of disposable pay. Disposable pay is defined in GSA's Federal Travel Regulation as an employee's compensation remaining after the deduction of any amounts required by law to be withheld (e.g., tax withholdings and garnishments). The amounts collected are paid to the bank monthly for military personnel and retirees and biweekly for civilian personnel. It takes approximately 2 months from the time an offset is initiated to the first bank payment. According to DFAS, from October 2001 through July 2002, Bank of America referred 53,462 DOD-wide cases with debt of $77.5 million to DOD for offset. DOD accepted and started offset for 74 percent of the cases and 69 percent of the debt amounts referred. The number and debt amount of Air Force-specific cases forwarded by Bank of America were not available. From November 2001 through July 2002, DFAS collected $2.7 million from active and retired Air Force military personnel through the offset program. During the same period, DOD collected $1.6 million from all DOD civilian employees; however, DFAS was unable to break this amount out by military service.
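As a rough illustration of the deduction computation, the following minimal sketch applies the 15 percent cap on disposable pay described above; the pay figures are hypothetical.

    # A minimal sketch of the salary offset deduction described above. The
    # 15 percent cap on disposable pay reflects the program rules; the pay
    # amounts are hypothetical.

    def disposable_pay(gross_pay, required_withholdings):
        # Compensation remaining after deductions required by law
        # (e.g., tax withholdings and garnishments).
        return gross_pay - required_withholdings

    def offset_deduction(debt_remaining, gross_pay, required_withholdings):
        # DFAS may deduct up to 15 percent of disposable pay per pay period,
        # but never more than the remaining travel card debt.
        cap = 0.15 * disposable_pay(gross_pay, required_withholdings)
        return min(cap, debt_remaining)

    # A member with $2,400 in monthly gross pay and $400 in required
    # withholdings has $2,000 in disposable pay, so at most $300 per month
    # can be deducted toward a $5,000 delinquent balance.
    print(offset_deduction(5_000.00, 2_400.00, 400.00))  # 300.0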
We found that Air Force management encouraged a culture that emphasized the importance of integrity and ethical values and was involved in monitoring travel card delinquencies. According to travel card program officials and documentation we obtained, Air Force officials, from the Vice Chief of Staff to wing commanders, have strongly emphasized for the past 2 to 3 years that the travel card program is a "commander's program" and that commanders are responsible for managing their delinquency rates. They explained that officials throughout the Air Force chain of command have monitored travel card delinquency rates and discussed the topic at their respective staff meetings. Documentation we obtained confirmed the use of detailed statistical reports to monitor installation-level delinquencies. Commanding officers hold unit commanders with excessive delinquency rates accountable for making improvements, and travel card delinquency statistics are discussed at command staff meetings. The importance of this tone at the top cannot be overstated. Other factors contributing to the reduction in Air Force delinquency rates include the following.

Air Force emphasis on financial management training. Each Air Force installation has a Financial Services Office with a trained financial management staff that oversees the travel card program. The Air Force also provides personal financial training to all inductees, covering topics such as developing personal budget plans, balancing checkbooks, preparing tax returns, and exercising financial responsibility. The training also covers disciplinary actions and consequences for financial irresponsibility by service members. In addition, the Air Force provides financial counseling and training classes through the Family Services Centers at each base and contracts for professional counselors and trainers.

Travel card program audits. The Assistant Secretary of the Air Force (Financial Management and Comptroller) requested Air Force Audit Agency audits of the travel card program, which produced recommendations to management and led to program improvements. According to a DOD Inspector General report, the Air Force Audit Agency issued 27 audit reports on the travel card program from fiscal year 1999 through fiscal year 2001. For example, in April 2001, the Air Force Audit Agency issued an audit report on Travis Air Force Base (AFB), one of the sites we audited. The report identified numerous systemic problems, including inadequate agency program coordinator (APC) oversight due to insufficient training, which resulted in unauthorized transactions not being identified. The Air Force Audit Agency made numerous recommendations for corrective actions, and our audit work showed that Travis AFB had acted on many of them.

DOD and Air Force initiatives. In March 2000, Air Force travel card delinquency rates were in the double digits—10.2 percent—similar to the Army and Navy delinquency rates. The Air Force initiated a number of actions in the fall of 2000 to reduce its delinquency rate.
For example, in December 2000, Air Force headquarters sent an E-mail message to travel card APCs asking them to (1) promote the split disbursement payment process, (2) turn off accounts for infrequent travelers, (3) use Bank of America Electronic Account Government Ledger System (EAGLS) reports to monitor and detect problem accounts, (4) include procedures to deactivate the travel card when a member changes duty location, and (5) correct discrepancies between the organizational codes assigned to cardholder accounts and the cardholders' current assigned units to ensure accurate reporting and effective monitoring of accounts. As an aid in correcting organizational coding, the E-mail included a directory for APCs to use in resolving problems with accounts incorrectly assigned to them—referred to as "orphan" accounts—by identifying where those accounts should properly be assigned.

Further, in response to June and September 2001 DOD policy memorandums to the heads of the military departments, the Air Force identified 100,000 travel cards for cancellation due to lack of use. According to an Air Force headquarters official, approximately 90,000 travel cards were canceled in October 2001. In addition, salary offset procedures were implemented in November 2001, resulting in a significant decrease in charged-off accounts in the first 6 months of fiscal year 2002. Also in November 2001, the Air Force Comptroller issued a letter to all major commands highlighting the use of the split disbursement payment process and interim vouchers as options for preventing delinquent balances when members are on long-term deployments.

According to Bank of America data, the Air Force increased the number of payments remitted to Bank of America via the split disbursement payment process from 20,487 payments (17 percent of all payments) totaling $12 million in October 2000 to 54,337 payments (39 percent of all payments) totaling $44 million in June 2002. Officials at the sites we audited told us that they encouraged cardholders to use the split disbursement payment process. For example, Hill Air Force Base comptroller personnel told us that they had increased use of the split disbursement payment process from 23 percent during the fourth quarter of fiscal year 2001 to 35 percent during the third quarter of fiscal year 2002. In addition, as of September 17, 2002, Travis AFB implemented a new policy making split disbursement the default, or automatic, payment method for all active duty military employees who use the government travel card; if an employee chooses not to use the split disbursement payment method, approval from the unit commander or first sergeant is required.

While the Air Force has made improvements in its control environment that have resulted in lower delinquency rates than the Army's and the Navy's, additional improvements could further reduce Air Force delinquency rates. In addition, similar to our Army and Navy findings, control environment weaknesses contributed to significant potential fraud and abuse of the Air Force travel card. Many of the problem cases that we reviewed were due to ineffective controls over the issuance of travel cards and the transfer or cancellation of accounts when individuals moved to other duty locations, separated, or retired. We also found that improvements are needed in the assignment and training of APCs.
The Air Force’s ability to prevent potentially fraudulent and abusive transactions that can eventually lead to additional delinquencies and charge-offs is significantly weakened if individuals with histories of financial irresponsibility are permitted to receive travel cards. Although the DOD policy provides that all DOD personnel are to use the travel card to pay for official business travel, the policy also provides that exemptions may be granted under a number of circumstances, including financial irresponsibility. However, DOD’s policy is not clear as to what level of financial irresponsibility by a travel card applicant would constitute a basis for such an exemption. The Air Force’s practice is to facilitate the issuance of travel cards—with few credit restrictions—to all applicants regardless of whether they have histories of credit problems. We found no evidence that the Air Force exempted any individuals or groups from required acceptance and use of travel cards, even those with histories of severe credit problems. DOD’s Financial Management Regulation provides that credit checks be performed on all travel card applicants, unless an applicant declines the conduct of a credit check. In July 1999, Bank of America began conducting credit checks on DOD travel card applicants and used the resulting information as a basis for determining the type of account— restricted or standard—it would recommend for new DOD travel applicants. DOD policy also permits APCs to raise the credit and ATM limits on restricted cards based on travel requirements. Our analysis of credit application scoring models and credit risk scores used by major credit bureaus confirmed that applicants with low credit scores due to histories of late payments are poor credit risks. Credit bureau officials told us that if their credit rating guidelines for decisions on commercial credit card application approvals were used to make decisions on travel card applicants, a significant number of low- and mid-level enlisted Air Force cardholders would not even qualify for the restricted limit cards. A credit history showing accounts with collection agency action or charge-offs poses an even higher credit risk. Any of these problems can be a reason for denying credit in the private sector. However, in DOD, individuals with no credit history, or little credit history, are generally issued restricted cards with lower credit limits. Credit industry research and the results of our work demonstrate that individuals with previous late payments are much more likely to have payment problems in the future. As discussed in this report, many of the Air Force travel cardholders that we audited who wrote numerous NSF checks, had severe prior financial problems, including accounts charged off, histories of delinquencies and charge-offs relating to other credit cards, and accounts in collection, or numerous bankruptcies. DOD Financial Management Regulation, Volume 9, Chapter 3. The regulation further provides that individuals who do not consent to a credit check may only receive a restricted card. In response to similar findings in our audit of the Army travel card program and an amendment proposed by Senators Byrd and Grassley, the Congress included a provision in the Department of Defense Appropriations Act for fiscal year 2003 requiring the Secretary of Defense to evaluate whether an individual is creditworthy before authorizing the issuance of any government charge card. 
If effectively implemented, this requirement should improve delinquency rates and reduce potential fraud and abuse.

We found numerous examples in which APCs failed to deactivate or close accounts when cardholders retired, were dismissed, or separated from the service, or failed to take the proper action to transfer accounts when employees were reassigned to other Air Force locations. The Air Force lacks sufficient guidance and management focus in this area. DOD's Financial Management Regulation requires APCs to terminate travel cards when cardholders die, retire, or are dismissed or separated from DOD. Bank of America has issued procedural guidance for transferring and terminating cardholder accounts. However, we found instances in which failure to follow these procedures—specifically with respect to travel card transfer and termination—resulted in travel card abuses and charge-offs. The cardholders benefited by using the travel cards to purchase a variety of goods and services for their personal use. Some did not pay their monthly bills, thereby essentially obtaining personal items at no cost. The following examples illustrate the effect of not taking appropriate actions to transfer, deactivate, or close travel card accounts.

• A Langley AFB APC failed to close an enlisted member's account after the individual left the service. The member left the service in January 2001 but continued to use his card until March 2001. Because the card was not canceled immediately upon the member's separation, the account remained open with a $5,000 credit limit, allowing the member to charge unauthorized ATM withdrawals and purchases. The member was not disciplined because he had already left the service. The APC stated that she was not aware of the misuse of the travel card until the account was charged off in April 2002 with an unpaid balance of $3,729.

• At Hill AFB, a senior airman (E-4) transferred to Yokota Air Base, Japan, in July 2001. The APC was unaware that the individual had transferred until his travel card account appeared as delinquent on the Bank of America reports. The APC deactivated the card in September 2001 and made repeated, unsuccessful attempts to contact the individual and the APC at Yokota Air Base. In January 2002, Bank of America placed the account, totaling $1,918, in salary offset. Although the individual continued to appear on Hill AFB delinquency reports, Hill AFB officials could not take any disciplinary action because the individual was no longer assigned to them. The account was eventually transferred from Hill to Yokota Air Base in March 2002. According to EAGLS data, the individual wrote two NSF checks to Bank of America in March and April 2002 in payment of his account. Bank of America closed the account in June 2002.

• Brooks AFB travel card officials failed to cancel a civilian employee's (GS-13) travel card account when he separated from the service in January 2000 and began working for a private contractor. The civilian continued to use his travel card after separation, charging over $17,000 in unauthorized purchases. The charges included approximately $1,000 in cash advances and several charges for an on-line dating service. The cardholder was not disciplined for the abuse because he had separated from the service. Information from EAGLS shows that the account was closed on September 13, 2002, and that as of October 25, 2002, the account had an unpaid balance of approximately $1,600, which had not yet been charged off.
We found a lack of emphasis on APC training and inadequate monitoring of APC training at two of our three case study locations—Nellis AFB and Travis AFB. As in our Army and Navy travel card audits, we found that Air Force APCs had excessive responsibilities. For example, APC duties were assigned as collateral duties, and certain APCs were responsible for as many as 1,200 accounts. We also found excessive turnover among military APCs at Nellis AFB and Travis AFB.

GAO's internal control standards state that management's commitment to competence and good human capital practices are critical factors in establishing and maintaining a strong internal control environment. Specifically, our standards state that management should identify the knowledge and skills required for various jobs and should provide needed training. The standards also state that establishing appropriate human capital practices, including hiring, training, evaluating, counseling, and disciplining personnel, is another critical control environment factor.

The emphasis on APC training varied across the three case study sites. Nellis AFB did not have a control mechanism in place to help ensure that all APCs received appropriate training, and Travis AFB did not train APCs in a timely manner. Specifically, Travis AFB APCs told us that they did not receive timely training on how to access and use Bank of America EAGLS data to monitor travel card activity when they were assigned APC duties. In contrast, we determined that Hill AFB had a mechanism in place to monitor APC training, and it provided that training in a timely manner. DOD policy provides that travel card training materials are to be distributed throughout the department and that APCs are to be informed of policy and procedural changes relating to the travel card program. However, neither DOD nor Air Force-wide procedures detail requirements for the extent, timing, and documentation of travel program training for APCs. APCs are not required to receive training on the duties of the position or on how to use available Web-based tools and reports from Bank of America before they assume their APC duties. The lack of emphasis on training could impair APCs' ability to monitor delinquencies and promptly detect and prevent potentially fraudulent and abusive activities.

As in our Army and Navy work, we determined that most Air Force APC duties were assigned to military personnel. As a result, APC positions usually have high turnover rates, which, in many cases, have resulted in less effective performance of APC duties, such as monitoring cardholder travel card activity. For example, at Nellis AFB, the average length of assignment for APCs was approximately 12 months, and at Travis AFB assignments for military APCs generally ran from 12 to 15 months. In addition, a Pacific Air Force official reported that during a recent 3-month period, one base experienced turnover in 18 of its 30 APC positions. In contrast, at Hill AFB, where most of the APCs were civilians, the average term for civilian APCs was approximately 20 months. Further, we found that Air Force APC duties at the locations we audited were "other duties as assigned." The primary duties of certain APCs we interviewed included data systems management and aircraft maintenance.
As prescribed by the DOD Financial Management Regulation, APCs "are responsible for the day-to-day operations of the DOD Travel Card Program." Volume 9, Chapter 3 of the regulation provides that APCs are responsible for a variety of key duties, including establishing and canceling cardholder accounts, tracking cardholder transfers and terminations, monitoring and taking appropriate actions with respect to account delinquencies, interacting with the bank, and fielding questions about the program from both cardholders and supervisors. APCs are also required to notify commanders and supervisors of all travel card misuse so they can take appropriate actions. Several APCs we interviewed told us they did not receive training on the full range of their APC duties until at least 6 months after they were assigned APC responsibilities. These APCs also told us they were not trained in using EAGLS until 6 months or more after they were assigned APC responsibilities.

In addition to the part-time nature of APC duties, the number of travel cardholders assigned to an APC can result in an excessive span of control, which impairs the APC's ability to perform monitoring and oversight effectively. If the span of control is excessive, APCs may not be able to provide the oversight necessary to prevent misuse of the travel cards. Table 4 shows the average span of control and the instances of APCs with a span of control greater than 100 cardholders. As shown in table 4, average APC span of control ratios varied at our case study locations. We also found that a high percentage of APCs had a span of control that exceeded Bank of America's guideline of 100 cardholders per APC. While we did not evaluate the guidance provided by Bank of America, we believe that one APC cannot effectively carry out all necessary management and oversight responsibilities if he or she, even working full-time, has responsibility for hundreds of cardholders.

Thousands of Bank of America and DOD employees had access to Bank of America's travel card transaction data system, known as EAGLS. Computer system access controls are intended to permit authorized users to access the system to perform their assigned duties and to preclude unauthorized persons from gaining access to sensitive information. Access to EAGLS is intended to be limited to authorized users according to their information needs and organizational responsibilities. Authorized EAGLS users include both customers (APCs requiring access to travel data for cardholders under their purview and individual travelers requiring access to their own travel transaction histories) and Bank of America employees, who may be granted one of five different levels of access depending on their assigned duties. The highest level of Bank of America employee access to EAGLS is the "super user" level. According to Bank of America security officials, this level of access—which provides users the ability to add, delete, or modify anything in the system, including creating accounts and editing transaction data—should be granted to as few individuals as possible. We found that 1,127 Bank of America employees had some level of access to the EAGLS system, including 285 with super user access. After we brought this matter to the attention of Bank of America security officials, they reviewed employee access and deactivated access for 655 employees who they determined should not have had any level of access.
Further, Bank of America has since initiated periodic reviews to ensure that it maintains appropriate levels of employee access. In addition, DOD employees retained APC access to EAGLS after relinquishing their APC duties or after they may have been transferred or terminated. In a 2000 survey of 4,952 individuals with APC-level access to EAGLS, DOD found that approximately 10 percent could not be located and may have been transferred or terminated or no longer had APC responsibilities. Because of concern that many of these accounts should be deactivated, Bank of America has begun a review to determine whether DOD employees with APC-level access no longer have APC responsibilities or have left the service.

Of the four key control activities associated with the fiscal year 2001 travel payment process that we tested, we found breakdowns associated with a lack of documentation to support the accuracy of travel reimbursements at all three locations and significant breakdowns at two locations in controls related to the requirement that employees submit vouchers within 5 days of completing travel. On a positive note, we found that travel vouchers were almost always paid within 30 days of submission. As a result, we ruled out late payment of travel vouchers as a contributing factor to travel card delinquencies at the three Air Force locations we audited. Our test results also showed that most travel charges were supported by approved travel orders, indicating minimal personal use—2 percent or less—of the travel card. This is considerably lower than at the Army sites we audited, where we estimated that personal charges were as high as 45 percent at one location, and at the Navy sites we audited, where we estimated that personal charges were as high as 26 percent at one location. However, as discussed later in this report, our overall Air Force data mining found several instances of personal use of the government travel card. Table 5 below shows the results of our statistical sampling tests. Appendix II includes the specific criteria we used to conclude on the effectiveness of these controls.

We found a lack of required receipts for hotel and rental car costs in the voucher packages associated with a number of transactions in our sample, indicating that these expenses should not have been reimbursed to the employees. For the three units we audited, Air Force Financial Services Offices were responsible for processing vouchers to ensure that only authorized, properly supported travel charges were reimbursed and that the expenses claimed were accurately calculated. In our samples, we found that most errors were in the following categories.

Missing receipts – At all three case study locations, we found that the majority of errors related to instances in which voucher packages did not include all receipts required by DOD regulations to support claims. For example, a Nellis AFB cardholder was paid over $700 in lodging costs on a voucher for which the required receipts were not attached to the copy of the travel voucher we reviewed. The Nellis AFB Comptroller told us that he believed the receipts were most likely lost between the processing of the voucher at Nellis AFB and the filing of the voucher at the Defense Finance and Accounting Service office in Denver. DFAS Denver officials stated that all of the receipts in the voucher package were copied for our review.
We were unable to determine whether the missing receipts resulted from poor record retention by DFAS Denver or from erroneous payments of expenses without the required receipts. In either case, the process for obtaining and retaining required receipts was inadequate.

Errors in amounts paid – We found instances at all three case study locations in which Financial Services Office personnel used incorrect per diem rates for lodging and for meals and incidental expenses to calculate the reimbursement amount, resulting in overpayments to the traveler.

Two of the case study sites we audited—Travis AFB and Hill AFB—had ineffective controls for ensuring that vouchers were submitted in a timely manner. DOD policy requires the traveler to submit a travel voucher within 5 days of return from travel. The failures we identified involved vouchers submitted from 8 to 87 days late. Late submission of a travel voucher increases the likelihood that travel card bills will become due before the employee receives a reimbursement for travel expenses.

Some of the transactions in our statistical sample could not be evaluated for key control attributes because of data management problems, which represent additional control weaknesses. These weaknesses included data entry errors, such as incorrect social security numbers, and organizational coding problems related to "orphan" accounts. Orphan accounts fall into limbo when transferring units do not deactivate travel card accounts as cardholders transfer to new Air Force units and the cardholders do not check in with the gaining unit APCs to ensure that their accounts are coded to their new units' organization codes. When the account of a transferring cardholder falls into this limbo status, the losing unit continues to receive reports on the account status but has no control over the cardholder, while the gaining unit's reports contain no information on the cardholder's account status. Based on our Nellis AFB statistical testing, we estimated that approximately 2 percent of the fiscal year 2001 transactions were affected by data entry problems and another 4 percent involved orphan accounts. We estimated that approximately 1 percent of the Hill AFB transactions and 5 percent of the Travis AFB transactions were associated with orphan accounts. Our testing did not identify any data entry problems at either Hill AFB or Travis AFB.

Our limited review of selected travel system controls at the three case study locations found problems in key system controls, including access controls, segregation of duties, and transaction histories. The travel vouchers we examined at the three test locations were processed through the Integrated Automated Travel System (IATS), DOD's primary travel voucher processing system. The Air Force Audit Agency's February 2002 report on IATS controls identified similar problems at 10 other Air Force locations. Because IATS performs all processing functions, from initiating travel account records through disbursing travel pay, it is critical that system controls be in place to protect against fraudulent payments. Access controls must be designed to protect computer resources against unauthorized access. One method of testing such controls is to run password cracker programs against the passwords currently in use to identify weak ones; these programs were not being used at the three sites, leaving weak passwords undetected.
Another control, required by Air Force Manual 33-223, Identification and Authentication, is that individual passwords be changed every 90 days. However, we found that this requirement was not implemented at one of our three case study locations, and supervisors at Nellis AFB did not follow up to determine whether password change instructions were followed.

We also found a lack of appropriate segregation of duties in IATS at all three of our test locations, giving users access to incompatible duties. Users should have access only to the data and system functions required to accomplish their stated responsibilities, and they should not have the ability to perform duties incompatible with their assigned responsibilities. We found that IATS users at all three case study locations had conflicting levels of access and, as a result, were able not only to create travel vouchers but also to update and audit the same records. For example, our review of access privileges at Hill AFB found that the privileges assigned to four users afforded them the ability to create, update, and audit travel vouchers. After we called this problem to the attention of the IATS manager, he immediately revised user access levels to ensure that auditors could not also create and update travel voucher information. According to the Air Force Audit Agency report issued in February 2002, this problem is attributable in some measure to an inherent weakness in the software design. Although IATS contains various levels of privileges that can be assigned to individual users, the software design does not effectively preclude the assignment of incompatible access privileges.

In addition, we found that travel voucher data in IATS did not include transaction histories or audit trails, a problem the Air Force Audit Agency also identified as systemic. Because the IATS software design does not provide the capability to track changes, it is impossible to obtain transaction histories to determine whether changes were made, or who may have made changes, to a particular voucher. This makes the system vulnerable to individuals who could use inappropriate IATS access to create a fictitious travel voucher, process a payment, and subsequently delete the travel record. According to the Air Force Audit Agency report, this problem is being addressed in the design of WINIATS, a Windows-based software application targeted to replace IATS in June 2003.

Our work identified numerous instances of potentially fraudulent and abusive activity associated with the Air Force's travel card program during fiscal year 2001 and the first 6 months of fiscal year 2002, similar to the types of cases we found in our Army and Navy work. For purposes of this report, we characterized as potentially fraudulent those cases in which cardholders might have committed bank fraud by writing three or more NSF checks or by writing checks on closed accounts to pay their Bank of America bills. We considered abusive travel card activity to include (1) personal use of the cards—any use other than for official government travel—regardless of whether the cardholders paid the bills and (2) cases in which cardholders were reimbursed for official travel and then did not pay Bank of America and thus benefited personally. In addition, some of the travel card activity that we categorized as abusive may be fraudulent if it can be established that the cardholder violated any element of federal or state criminal codes.
Failure to implement controls to reasonably prevent such transactions increases the Air Force's vulnerability to additional delinquencies and charge-offs. During the 18-month period covering fiscal year 2001 and the first half of fiscal year 2002, over 6,300 individuals wrote NSF checks, or "bounced checks," to Bank of America as payment for their travel card bills, including over 400 individuals who wrote three or more NSF checks—potentially fraudulent acts. The potentially fraudulent NSF cases identified in our work include one individual who had charged over $13,000 to the travel card account and wrote seven NSF checks to Bank of America; the Air Force court-martialed the individual and imposed a 90-day confinement. Table 6 includes details on 10 individuals who committed potentially fraudulent acts by writing three or more NSF checks to pay their travel card accounts. Six of the 10 cardholders included in table 6 had significant credit problems prior to card issuance, such as charged-off credit card accounts and automobile loans, bankruptcies, and referrals to collection agencies for unpaid bills. The following provides detailed information on some of these cases.

Cardholder #1 was a reservist technical sergeant (E-6) who served one weekend each month. Bank of America records showed that the travel card account was opened on December 22, 1999, and that the individual subsequently wrote three NSF checks totaling $3,214 in payment of his travel card bills. In addition, the individual forged a check in the amount of $260. The individual's account was closed on January 9, 2002, and an unpaid balance of $6,666 was charged off. The individual's credit report showed that he had credit problems prior to issuance of the government travel card, including repossession of an automobile and a charged-off account. Bank representatives had numerous conversations with the individual about his account, and we found that the individual's travel card account was included on monthly delinquency reports. Bank of America ultimately charged off the travel card account. The individual was discharged from the Air Force under "Other Than Honorable Conditions" for failure to pay his military travel card bills on time and for using his travel card for unauthorized purposes.

Cardholder #2 was an airman (E-3) at Tinker AFB, Oklahoma. Bank of America records showed that the individual's account was opened on August 25, 2000, and that the individual subsequently wrote seven NSF checks totaling $23,137 in payment of her travel card bills. The NSF checks made the account appear to have available credit—a practice known as "boosting"—enabling the individual to make cash withdrawals and additional purchases. Bank of America records also showed that bank representatives had numerous conversations with the individual about her travel card debt. The individual's account was placed in the salary offset program on March 19, 2001, with monthly payments of $169. The travel card account was closed on July 18, 2002, and an unpaid balance of $13,908 was charged off. The individual's credit report showed no credit problems prior to the issuance of the travel card. Bank of America notified the squadron about the NSF checks issued in payment of the individual's travel card account.
A subsequent Air Force investigation identified numerous abuses of the travel card, including multiple uses of the card in 1 day for personal ATM withdrawals and 187 other instances of misuse totaling approximately $13,700, including personal purchases at vendors such as Victoria's Secret. These findings resulted in the individual being court-martialed, fined $5,000, and initially sentenced to confinement on the base for about 135 days; however, the base commander reduced the sentence to less than 90 days due to the cardholder's pregnancy.

Cardholder #3 was a technical sergeant (E-6) stationed at Wright-Patterson AFB, Ohio, who was also the APC for his unit. Bank of America records showed that the individual's account was opened on October 10, 1998, and that the cardholder subsequently wrote three NSF checks totaling $6,235. The individual's travel card account was closed on May 3, 2002, and an unpaid balance of $7,679 was charged off. The bank's customer contact log indicates that bank representatives had numerous conversations with the individual about the delinquent account. The individual's credit report showed significant credit problems prior to the individual receiving the travel card. Bank of America notified the squadron that the individual had submitted several NSF checks. According to an Air Force official, the problems reported by the bank were especially disturbing because the individual was a trusted combat veteran with many years of service who also functioned as the squadron's APC. An Air Force investigation of the individual's travel card abuses revealed that the individual (1) made approximately $6,000 in personal, nonauthorized charges, (2) submitted a $4,500 NSF check to the bank to boost the amount of available credit on his account to permit additional cash advances, and (3) in a matter unrelated to his travel card abuses, stole checks totaling $7,500 from the U.S. mail. The individual was court-martialed for travel card abuse and theft of U.S. mail, sentenced to 1 year in jail, reduced in pay grade to E-1, and discharged from the military for "financial difficulties."

Cardholder #4 was an airman (E-3) reservist assigned to March AFB, California, who was also a full-time DOD employee (GS-9) in a position involving similar work. Our analysis of Bank of America records showed that the individual obtained two travel card accounts during two different periods and wrote both NSF checks and checks drawn on closed bank accounts to Bank of America in payment of the two accounts. The first account, opened in January 2000, was closed in February 2001 with an unpaid balance of $4,771 that was subsequently charged off. Air Force officials told us that the individual obtained the second account in October 2001 by having a different superior officer, who was unaware of the previous travel card account, sign the application for the new card. The individual fraudulently used a relative's social security number to apply for the second travel card account. In payment of his second travel card account, the individual wrote seven checks to Bank of America—four NSF checks totaling $7,131 drawn on an open bank account and three checks totaling $19,225 drawn on a closed bank account. The cardholder used these checks to make large payments, which boosted his available balance and permitted further cash withdrawals from the account.
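The "boosting" practice described above exploits the lag between when a payment posts and when the check is returned. The following minimal sketch walks through the mechanics with hypothetical amounts:

    # A minimal sketch, with hypothetical amounts, of "boosting": a payment
    # by check is credited before the check clears, temporarily inflating
    # the available balance on the travel card.

    credit_limit = 5_000.00
    balance = 4_800.00                    # account nearly at its limit
    available = credit_limit - balance    # only $200 available

    payment = 4_500.00                    # cardholder mails a check that
    balance -= payment                    # will bounce; the bank posts it
    available = credit_limit - balance    # available "boosted" to $4,700

    withdrawals = 4_000.00                # cash withdrawals made against
    balance += withdrawals                # the boosted availability

    balance += payment                    # days later the check is returned
    print(balance)                        # NSF and the payment is reversed:
                                          # 8800.0 owed on a $5,000 limit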
An Air Force official stated he was unaware of the problem because the NSF checks masked the delinquency. The individual's second travel card account was closed on June 3, 2002, and an unpaid balance of $12,665 was charged off. Bank of America's customer contact log indicates that its representatives had numerous conversations with the individual about this account. The cardholder resigned his civilian DOD position and was charged with (1) identity theft related to the use of his relative's social security number, (2) being absent without leave, (3) failure to participate in monthly training, and (4) financial irresponsibility related to personal use of the government card when not on military orders. The individual was in the process of being discharged from his military E-3 reservist position in October 2002. The individual's credit report showed that he had several credit problems, including bankruptcies and a charge-off, prior to receiving a government travel card.

Cardholder #5 was a Virginia state employee assigned to the Air National Guard in Richmond, Virginia. Bank of America records showed that the individual's account was opened on March 18, 1999. The individual wrote four NSF checks totaling $2,818 and stopped payment on two checks totaling $3,230 to Bank of America. The individual's travel card account was closed on November 26, 2001, and an unpaid balance of $2,127 was charged off. The cardholder paid off the account on June 17, 2002. Bank of America records indicate that bank representatives had numerous conversations with the cardholder about this account. The individual's credit report did not show any significant credit problems prior to issuance of the card. The current APC, who assumed that role in July 2001, determined that the individual was delinquent on his government travel card account when he reviewed Bank of America delinquency reports. The APC referred the matter to the individual's unit commander, who subsequently counseled the individual on "multiple" occasions regarding the card's use and delinquency. The APC told us that because the individual was a state employee and not a member of the Air National Guard, he was not eligible for, and should never have been granted, an Air Force travel card.

We also found numerous examples of Air Force personnel misusing and abusing their government travel cards by making transactions that were clearly not for the purpose of government travel, similar to those we reported in our Army and Navy reports. As discussed further in appendix II, we used data mining procedures to identify transactions that we believed to be potentially fraudulent or abusive based upon the nature, amount, merchant, and other identifying characteristics of the transaction. Through these procedures, we found instances in which cardholders abused their travel cards by purchasing a wide variety of personal goods or services unrelated to official government travel. As shown in table 7, we determined that during an 18-month period, Air Force cardholders charged approximately $31,000 to purchase admission to entertainment events, such as NFL football games and a Janet Jackson concert. We also identified travel card transactions totaling approximately $14,000 for gambling; $31,000 for cruise packages; and $32,000 coded as purchases at gentlemen's clubs, which provide adult entertainment. The examples shown in table 7 include both instances where the cardholders paid their bills and instances where they did not.
Our investigative work showed that gentlemen's clubs were sometimes used to convert the travel card to cash by supplying cardholders with actual cash or "club cash" for a 10 percent fee. To illustrate, an Air Force employee who charged $440 to his or her government travel card at one of these clubs would receive $400 in cash. Such charges are processed by the establishment's merchant bank and authorized by Bank of America in part because the merchant category code (MCC)—which identifies the nature of the transaction and is to be used by Bank of America to block improper purchases—is circumvented when the establishments report the charges as restaurant, dining, or bar charges. The club would then receive payment for a $440 restaurant charge.

Examples of Travel Card Abuse

We found cases in which individuals used their travel cards for both official and personal purposes but failed to pay their accounts, resulting in accounts that were charged off or placed in salary offset or fixed payment plans. Table 8 provides examples of those cases. The following examples include details of cases summarized in table 8.

Cardholder #1 is a staff sergeant (E-5) in the Idaho Air National Guard who is employed full-time as a juvenile counselor at a county correctional facility. The cardholder told our investigators that from December 22, 2000, to February 19, 2001, his wife used his government travel card without his knowledge or consent. Bank of America records showed that transactions for that period totaled over $13,000, of which over $10,000 was for on-line gambling charges and another $3,000 was for ATM withdrawals. There were also several credits to the cardholder's account totaling over $5,000 from his wife's gambling winnings. The cardholder's wife admitted to a gambling addiction and to using their personal bank debit card and her husband's government travel card to fund her addiction. Upon discovering his wife's abusive use of his government travel card, the cardholder immediately briefed his commanding officer, who informed the APC, and the account was closed. The cardholder also contacted Bank of America to work out a payment plan for the debt, but no agreement could be reached. As a result of his inability to pay the debt incurred by his wife, the cardholder filed for Chapter 7 bankruptcy. On September 3, 2001, Bank of America charged off an unpaid balance of $7,258 on the cardholder's travel card account. To date, no criminal charges have been initiated against the cardholder's now ex-wife. In researching this case, we noted that although DOD has requested that Bank of America block certain merchant category codes to help prevent improper travel card transactions, such as transactions for on-line gambling at www.PROCCY, merchants are able to circumvent such restrictions by assigning permissible merchant codes to otherwise improper transactions.
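A minimal sketch of how such MCC-based blocking works—and how a miscoded transaction slips past it—appears below. The category names follow those quoted in this report; the numeric codes and the blocklist contents are illustrative assumptions, not the contract's actual configuration.

    # A minimal sketch of MCC-based transaction blocking. The filter sees
    # only the MCC the merchant's bank reports, so a gambling charge coded
    # as retail is indistinguishable from a legitimate purchase.

    BLOCKED_MCCS = {
        "7995",  # Betting--Including Lottery, Gaming Chips, Track Wagers
    }

    def authorize(reported_mcc):
        # Approve the charge unless its reported MCC is on the blocklist.
        return reported_mcc not in BLOCKED_MCCS

    print(authorize("7995"))  # False: honestly coded betting is declined
    print(authorize("5999"))  # True: the same charge reported as
                              # "Miscellaneous and Specialty Retail Stores"
                              # (assumed code 5999) is approved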
For example, in this case, to mask gambling activity, the on-line gambling establishments with which the cardholder's wife dealt used the merchant category codes for "Miscellaneous and Specialty Retail Stores" and "Professional Services—Not Elsewhere Classified" instead of the merchant category code for "Betting—Including Lottery, Gaming Chips, Track Wagers." However, these establishments credited the wife's winnings to the cardholder's account using the merchant category code for "Betting—Including Lottery, Gaming Chips, Track Wagers." Active monitoring by the APC of ongoing travel card activity would have helped detect the problem transactions sooner.

Cardholder #2 was a highly skilled technical sergeant (E-6) at Travis AFB, California, who held a secret clearance and worked on C-5 aircraft, large cargo aircraft designed for airlifting weapons and supplies. Our discussions with base officials and our review of the cardholder's personnel file and credit report revealed that the cardholder had several credit card delinquencies prior to issuance of the travel card. In March 1998, prior to being assigned to Travis AFB, the cardholder had received an Article 15 for wrongfully using his American Express government travel card for personal gain and blaming the misuse of the travel card on another family member. In March 2001, when the individual transferred to Travis AFB, his new APC noted that the individual's travel card account had a past due balance of $2,257. The APC reported this information to the cardholder's unit commander. At that time, the account was suspended, and Bank of America closed and canceled the cardholder's account a week later. However, Travis AFB officials told us that they asked Bank of America to keep the individual's travel card account open so that he could travel where necessary to make repairs to downed C-5 aircraft. The officials told us that the cardholder was one of a few experts who could supervise repairs on the C-5 aircraft and that, when problems arose with the aircraft, repairs had to be made immediately to get the plane back in the air. On April 16, 2001, the unit commander counseled the cardholder and gave him a letter of reprimand for nonpayment of his travel card bill. On June 25, 2001, the cardholder received another Article 15 for failure to pay his "Military Star Account" with the base Army and Air Force Exchange Service (AAFES) store. Both Article 15s and the letter of reprimand contained statements indicating that this behavior would not be tolerated. It is apparent that these statements neither deterred the individual's continued delinquency nor were enforced by the officials who made them.

During the fall of 2001, Air Force investigators were notified that personal protective gear, including body armor and biochemical and biological protective masks, was missing from C-5s arriving in Afghanistan. The cardholder came under suspicion as one of a few individuals with access to C-5 aircraft. During the ensuing investigation, his security badge was revoked, and he had to be escorted to and from his worksite. Shortly thereafter, Air Force investigators videotaped the individual selling military protective gear in a town near the base, and the individual was arrested and charged with theft and sale of government property.
Investigators determined that the individual was addicted to gambling and had used his government travel card reimbursements and the proceeds from the sale of stolen government property to finance his gambling habit. In January 2002, the individual was court-martialed, and in March 2002, he was convicted of theft and sale of $50,000 in government property, dishonorably discharged, and sentenced to a 5-year jail term. Air Force investigative and legal officials told us that the individual's failure to pay his travel card debt was considered in the sentencing decision. Had Travis AFB officials acted sooner to cancel the technical sergeant's travel card account, revoke his security clearance, and discharge him from the service, they might have prevented the theft of critical protective gear needed by troops deployed in Afghanistan.

Further, we found examples in which individuals used their government travel cards to make personal purchases of items such as computers, entertainment, college tuition, and jewelry but kept their accounts current by paying their travel card bills in a timely manner. We considered these purchases to be abusive travel card activity because the travel card may be used only for official government travel expenses. Personal use of the travel card increases the risk of charge-offs, which are costly to the government and the taxpayer. In addition, instances of personal use are indicative of internal control breakdowns, such as the failure of APCs to monitor travel card activities. Table 9 provides details on 10 cases in which cardholders made personal purchases but paid their accounts.

The instances illustrated in this report clearly represent abusive use of the government travel card. Air Force personnel are informed that these types of transactions are not permitted, and all Air Force cardholders are required to sign a statement of understanding that the card is to be used only for authorized official government travel expenses. Air Force policy provides commanders with a wide variety of disciplinary options for addressing misconduct by service members. The means of discipline include counseling, oral and written reprimands, creating an unfavorable information file, issuing Article 15s, and court-martial. The policy leaves the means of discipline and the actual punishment to the discretion of the individual commander based upon the facts of each case. However, for the cases involving 58 cardholders whose accounts involved NSF checks, charge-offs, or salary offsets, we found documented evidence of disciplinary actions in only 19 cases. Our analysis of cases in which travel card accounts had been charged off, were in salary offset, or involved NSF checks showed that when the Air Force took disciplinary actions, those actions ranged from counseling to court-martial and discharge from the service. In certain cases where documentation of disciplinary actions was not available, Air Force officials told us that verbal counseling had been provided but not documented. In other cases where documentation was not available, Air Force officials claimed that disciplinary actions had been taken but that records had not been retained because the individuals had transferred or left the service. At Hill AFB, most of the cases we reviewed involved civilians. Air Force Instruction 36-704, Discipline and Adverse Actions, provides guidance on disciplinary action for civilians who fail to honor valid debts or legal obligations.
However, the guidelines limit disciplinary action to reprimands, even after the third offense. In addition, we found that 32 of the 58 most severe abusers of the travel card still had secret or top secret clearances in August 2002. According to Air Force Instruction 31-501, Personnel Security Program Management, military units are responsible for maintaining unfavorable information files on individuals and are supposed to notify the central security facility of instances of financial irresponsibility or other behavioral problems that may affect an individual's security clearance. However, we determined that the Air Force does not have consistent procedures in place to link travel card account delinquencies or charge-off status to an individual's security clearance. Some of the Air Force personnel holding security clearances who have had difficulty paying their travel card bills may present security risks to the Air Force. We have referred the names of these individuals to the Air Force Central Adjudication Facility for appropriate evaluation.

Linking disciplinary actions and security clearances to misuse of travel cards was recently addressed by the fiscal year 2003 Defense Appropriations Act. In addition to requiring the Secretary of Defense to establish guidance and procedures for disciplinary actions, section 8149(c) of the act states that such actions may include (1) review of the security clearance of the cardholder in cases of misuse of the government travel card and (2) modification or revocation of the security clearance in light of such review.

Since March 2002, DOD and the Air Force have taken additional actions to reduce delinquencies in the travel card program. For example, the DOD Comptroller established a Charge Card Task Force to address management issues related to DOD's purchase and travel card programs. The task force's final report, issued on June 27, 2002, called for additional actions to improve the controls over the travel card program. However, to date, many of the actions that DOD has taken address the symptoms rather than the underlying causes of the problems with the program. Specifically, actions to date have focused on dealing with accounts that are seriously delinquent—"back end" or detective controls rather than preventive controls.

On September 27, 2002, the Air Force Assistant Secretary for Financial Management (Comptroller) issued a memorandum emphasizing travel card management tools and policy updates to assist local commanders in the detection of travel card misuse. Specifically, the memorandum (1) directed that travel cards that have had no activity within the last 12 months be canceled, (2) emphasized that program coordinators should use new EAGLS exception reports to help identify suspicious card activity that may indicate abuse or potential delinquency problems before they appear on delinquency reports, and (3) noted that the Air Force is conducting a thorough review of MCCs to ensure that cards cannot be used at establishments that are not travel related. In addition, Air Force officials told us they are also considering contracting for data mining services to support their oversight of the travel card program. The Congress has recently addressed several of the key issues we identified in our Army and Navy work.
Section 8149(b) of the Department of Defense Appropriations Act, 2003, requires creditworthiness evaluations of all potential cardholders and guidelines and procedures for disciplining individuals for fraudulent and abusive use of government travel cards. Further, section 1008(a) and (b) of the Bob Stump National Defense Authorization Act for Fiscal Year 2003 provides authority for the Secretary of Defense to require (1) use of the split disbursement process, whereby any part of a DOD employee's or service member's travel reimbursement is paid directly to the travel card-issuing bank, and (2) deductions of prescribed amounts from the salary and retirement pay of DOD employees or service members who have delinquent travel card balances and payment of those amounts to the travel card-issuing bank.

The intent of the travel card program was to improve convenience for the traveler and to reduce the government's costs of administering travel. Since implementing the travel card as part of its travel program, the Air Force has changed its management strategies for overseeing the use of government travel cards. What was once a weak internal control environment in the travel program has been strengthened, resulting in a decrease in delinquency rates and charge-offs of bad debts. Despite these efforts, the Air Force continues to experience potentially fraudulent and abusive travel card activity. Air Force and DOD actions have addressed many areas of the program needing improvement. However, DOD and the Air Force will need to implement further improvements to more effectively prevent potentially fraudulent and abusive activity and to further reduce severe credit problems associated with the travel card. A focus on additional “front-end” or preventive controls will be paramount. In this regard, section 8149(b) of the fiscal year 2003 DOD Appropriations Act requires creditworthiness evaluations of all potential cardholders and guidelines and procedures for disciplining individuals for fraudulent and abusive use of government charge cards.

To strengthen the overall control environment and improve internal control for the Air Force's travel card program, we recommend that the Secretary of the Air Force take the following actions. We also recommend that the Under Secretary of Defense (Comptroller) assess these recommendations and, where applicable, incorporate them into or supplement the DOD Charge Card Task Force recommendations to improve travel card policies and procedures throughout DOD.

We recommend that the Secretary of the Air Force establish specific policies and procedures governing the issuance of individual travel cards to military and civilian employees, including the following:

1. In accordance with recently enacted legislation, provide individuals who have no prior credit histories “restricted” travel cards with low credit and ATM limits.

2. Develop procedures to periodically evaluate the frequency of cardholders' travel card use and close accounts of infrequent travelers in order to minimize exposure to fraud and abuse. In conjunction with the periodic reviews, cancel accounts for current infrequent travelers as noted in the Charge Card Task Force report.

3. Evaluate the feasibility of activating and deactivating travel cards, regardless of whether they are standard or restricted cards, so that they are available for use only during the period authorized by the cardholders' travel orders.
At a minimum, this policy should focus on controlling travel card use by “high-risk” enlisted military personnel in the E-1 to E-6 grades.

4. Develop comprehensive, consistent Air Force-wide initial training and periodic refresher training for travel cardholders that focuses on the purpose of the program and appropriate uses of the card. The training should emphasize the prohibitions on personal use of the card, including gambling, personal travel, and adult entertainment. Such training should also address the policies and procedures of the travel order, voucher, and payment processes. For entry-level personnel, the training should also include information on basic personal financial management techniques to help avoid financial problems that could affect an individual's ability to pay his or her travel card bill.

We recommend that the Secretary of the Air Force establish the following specific policies and procedures to strengthen controls to address improper use of the travel card:

5. Establish guidance regarding the knowledge, skills, and abilities required to carry out APC responsibilities effectively.

6. Establish guidance on APC span of control so that such responsibilities are properly aligned with the time available to ensure effective performance.

7. Determine whether certain APC positions should be staffed on a full-time basis rather than as collateral duties.

8. Establish Air Force-wide procedures to provide assurance that APCs receive training on their APC responsibilities, including requirements for monitoring cardholders' travel card use. The training should include how to use EAGLS transaction reports and other available data to monitor cardholder use of the travel card—for example, reviewing account transaction histories to ascertain whether transactions were incurred during periods of authorized travel, appear to be appropriate travel expenses, and are from approved MCCs.

9. Require agency program coordinators to review EAGLS reports to identify cardholders who have written NSF checks for payment on their account balances and to refer this information to the employee's immediate supervisor.

10. Review, in conjunction with Bank of America, APC-level access to EAGLS to limit such access to only those individuals with current APC duties.

11. Establish Air Force procedures detailing how APCs should carry out their responsibility to monitor travel card use for all cardholders assigned to them. Include in the procedures the development of a data mining program that would enable APCs to easily identify potentially inappropriate transactions for further review.

12. Enforce controls for canceling accounts after employees transfer to other units to avoid “orphan” accounts that are not subject to effective management oversight.

13. Require cognizant APCs to retain records documenting any cardholder's fraudulent or abusive use of the travel card and require that this information be provided to the gaining APC when the cardholder is transferred.

14. Review records of individuals whose accounts had been charged off or placed in salary offset to determine whether they have been referred to the Air Force Central Adjudication Facility for a security review.

15. Strengthen procedures regarding employees leaving the service to assure that all travel card accounts are deactivated or closed and that repayment of any outstanding debts is arranged. Perform a review to determine that these procedures are implemented effectively and that accounts of departed cardholders are deactivated or closed in a timely manner.
16. Develop procedures to identify active cards of departed cardholders, including comparing cardholder and payroll data.

In oral comments on a draft of this report, DOD and the Air Force concurred with all 16 of our recommendations and stated that they had taken actions, or had actions underway, to address many of them. For example, with respect to actions completed, DOD stated that the Air Force recently implemented procedures to (1) evaluate the frequency of cardholder travel card use and close travel card accounts that were not used in the past year and (2) work with Bank of America to perform semiannual reviews of travel card use. With respect to actions underway, (1) the Air Force has started a project to evaluate the feasibility of deactivating travel cards so that they are available for use only during periods of authorized travel and (2) DOD is evaluating travel card training and developing revised policy requirements for APC span of control and travel card management responsibilities.

As agreed with your offices, unless you announce the contents of this report earlier, we will not distribute this report until 30 days from its date. At that time, we will send copies to interested congressional committees; the Secretary of Defense; the Under Secretary of Defense (Comptroller); the Secretary of the Air Force; the Assistant Secretary of the Air Force for Financial Management (Comptroller); the Director of the Defense Finance and Accounting Service; and the Director of the Office of Management and Budget. We will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. Please contact Gregory D. Kutz at (202) 512-9505 or kutzg@gao.gov, John J. Ryan at (202) 512-9587 or ryanj@gao.gov, or Gayle L. Fischer at (202) 512-9577 or fischerg@gao.gov if you or your staffs have any questions concerning this report. Major contributors to this report are acknowledged in appendix V.

In 1983, the General Services Administration (GSA) awarded a governmentwide master contract to a private company to provide government-sponsored, contractor-issued travel cards to federal employees to be used to pay for costs incurred on official business travel. The intent of the travel card program was to provide increased convenience to the traveler and to lower the government's cost of travel by reducing the need for cash advances to the traveler and the administrative workload associated with processing and reconciling travel advances. The travel card program includes both individually billed accounts—accounts held and paid by individual cardholders—and centrally billed accounts, which are used to purchase transportation or to pay the travel expenses of a unit and are paid directly by the government. As of the end of fiscal year 2001, over 2.1 million individually billed travel cards had been issued to federal government travelers. These travel cardholders charged $3.6 billion during the same fiscal year. Under the current GSA master contract, the Department of Defense entered into a tailored task order with Bank of America to provide travel card services to DOD and the military services, including the Air Force. Table 10 provides the number of individually billed travel cards outstanding and the related dollar amount of travel card charges by DOD and its components in relation to the total federal government.
As shown in table 10, DOD accounts for about 1.4 million, or 66 percent, of the individually billed travel cards issued by the entire federal government, and DOD's cardholders charged about $2.1 billion, or about 59 percent, of the federal government's travel card charges during fiscal year 2001. Table 10 also shows that the Air Force provided 501,306 individually billed cards to its civilian and military employees as of September 2001. These cardholders charged an estimated $831 million to their travel cards during fiscal year 2001.

The Travel and Transportation Reform Act of 1998 (Public Law 105-264) expanded the use of government travel cards by mandating the use of the cards for all official travel unless specifically exempted. The act is intended to reduce the overall cost of travel to the federal government through reduced administrative costs and by taking advantage of rebates from the travel card contractor. The act requires that agencies reimburse cardholders for proper travel claims within 30 days of submission of approved travel vouchers by the cardholders. Further, the act allows, but does not require, agencies to offset a cardholder's pay for amounts the cardholder owes to the travel card contractor as a result of travel card delinquencies not disputed by the cardholder. The act calls for GSA to issue regulations incorporating the requirements of the act. GSA incorporated the act's requirements into the Federal Travel Regulation, which governs travel and transportation and relocation allowances for all federal government employees, including overall policies and procedures governing the use of government travel cards. Agencies are required to follow the requirements of GSA's Federal Travel Regulation but can augment these regulations with their own implementing regulations. DOD issued its Financial Management Regulation (FMR), Volume 9, Chapter 3, “Travel Policies and Procedures,” to supplement GSA's travel regulations. DOD's Joint Travel Regulations, Volume 1, “Uniformed Service Members,” and Volume 2, “Civilian Personnel,” refer to the FMR as the controlling regulation for DOD's travel cards.

As shown in figure 7, the Air Force's travel card management program for individually billed travel card accounts encompasses card issuance, travel authorization, cardholder charges for goods and services, travel voucher processing and payment, and management of travel card usage and delinquencies. When an Air Force civilian or military employee or the employee's supervisor determines that he or she will need a travel card, the employee contacts the unit's travel card agency program coordinator (APC) to complete an individually billed card account application form. As shown in figure 8, the application requires the applicant to provide pertinent information, including full name and social security number, and to indicate whether he or she is an active, reserve, or civilian employee of the Air Force. The applicant is also required to initial a statement on the application acknowledging that he or she has read and understands the terms of the travel card agreement and agrees to be bound by these terms, including a provision acknowledging that the card will be used only for official travel. The APC is required to complete the portion of the member's application concerning who will be responsible for managing the use of, and delinquencies related to, the card.
Bank of America is required to issue a travel card to all applicants for whom it receives completed applications signed by the applicants, the applicants' supervisors, and the APCs. Bank of America issues travel cards with either a standard or a restricted credit limit. If an employee has little or no credit history or poor credit based on a credit check performed by Bank of America, Bank of America may suggest to the service that the applicant receive a restricted credit limit of $2,000 instead of the standard credit limit of $5,000. However, as shown in figure 8, the application allows the employee to withhold permission for Bank of America to obtain credit reports. If this option is selected, Bank of America automatically issues a restricted credit limit card to the applicant. Before cardholders leave the Air Force, they are required to contact their APCs and notify them of their planned departure. Based on this notification from the cardholders, the APCs are to deactivate or terminate the cardholders' accounts.

When a cardholder is required to travel for official government purposes, he or she is issued a travel order authorizing the travel. The travel order is required to specify the timing and purpose of the travel authorized. For example, the travel order is to authorize the mode of transportation, the duration and points of the travel, and the amounts of per diem and any cash advances. Further, the Air Force can limit the amount of authorized reimbursement to military members based on the availability of lodging and dining facilities at military installations. For authorized travel, travelers must use their cards to pay for allowable expenses such as hotels, rental cars, and airfare. The travel card can also be used for meals and incidental expenses, or cash can be obtained from an automatic teller machine. When the travel card is submitted to a merchant, the merchant processes the charge through its banking institution, which in turn charges Bank of America.

At the end of each banking cycle (once each month), Bank of America prepares a billing statement that is mailed to the cardholder for the amounts charged to the card. The statement also reflects all payments and credits made to the cardholder's account. Bank of America requires that the cardholder make payment on the account in full within 30 days of the statement closing date. If the cardholder does not pay his or her monthly billing statement in full and does not dispute the charges within 60 days of the statement closing date, the account is considered delinquent.

Within 5 duty days of return from travel, the cardholder is required to submit a travel voucher claiming legitimate and allowable expenses incurred while on travel. Further, the standard is for the cardholder to submit an interim voucher every 30 days for extended travel of more than 45 days. The amount that cardholders are reimbursed for their meals, incidental expenses, and hotels is limited by geographical rates established by GSA. Upon submission of a proper voucher by the cardholder, DOD has 30 days in which to make reimbursement without incurring late payment fees. Cardholders are required to submit their travel vouchers to their supervisors or other designated approving officials, who must review the vouchers and approve them for payment. If the review finds an omission or error in a voucher or its required supporting documentation, the approving official must inform the traveler of the error or omission.
After the supervisor approves a cardholder's travel voucher package for payment, the voucher-processing unit at the location to which the cardholder is assigned processes it. The voucher-processing unit enters travel information from the approved voucher into DOD's Integrated Automated Travel System (IATS). IATS calculates the amount of per diem authorized in the travel order and voucher and the amount of mileage, if any, claimed by the cardholder. In addition, any other expenses claimed and approved are entered into IATS. If problems with the voucher are found during the initial entry of the information into IATS or during audits after the initial entry, the voucher can be rejected and returned to the cardholder for correction. Once the vouchers are processed and possibly audited, they are sent to DFAS for payment to the cardholder or, if the cardholder elected to use the split disbursement payment process whereby part of the reimbursement is sent directly to Bank of America, to Bank of America and the cardholder. If the payment of an approved proper voucher takes longer than 30 days, DOD is required to pay the cardholder a late payment fee plus an amount equal to what Bank of America would have been entitled to charge the cardholder had the cardholder not paid the bill by the due date.

In addition to controlling the issuance and credit limits related to the travel card, APCs are also responsible for monitoring the use of, and delinquencies related to, travel card accounts for which they have been assigned management responsibility. Bank of America's Web-based Electronic Account Government Ledger System (EAGLS) provides on-line tools that are intended to assist APCs in monitoring travel card activity and related delinquencies. Specifically, APCs can access EAGLS to monitor and extract reports on their cardholders' travel card transaction activity and related payment histories.

Both the Air Force and Bank of America have a role in managing travel card delinquencies under GSA's master contract. While APCs are responsible for monitoring cardholders' accounts and for working with cardholders' supervisors to address any travel card payment delinquencies, Bank of America is required to use EAGLS to notify the designated APCs if any of their cardholders' accounts are in danger of suspension or cancellation. When Bank of America has not received a required payment on a travel cardholder's account within 60 days of the billing statement closing date, the account is considered delinquent. As summarized in figure 9, specific actions are required of both the Air Force and Bank of America based on the number of days a cardholder's account is past due. The following is a more detailed explanation of the required actions with respect to delinquent travel card accounts:

45 days past due—Bank of America is to send a letter to the cardholder requesting payment. Bank of America has the option to call the cardholder with a reminder that payment is past due and to advise the cardholder that the account will be suspended if it becomes 60 days past due.

55 days past due—Bank of America is to send the cardholder a presuspension letter warning that Bank of America will suspend the account if it is not paid. If Bank of America suspends a travel card account, the card cannot be used until the account is paid.
60 days past due—The APC is to issue a 60-day delinquency notification memorandum to the cardholder and to the cardholder's immediate supervisor, informing them that the cardholder's account has been suspended due to nonpayment. The next day, a suspension letter is to be sent by Bank of America to the cardholder providing notice that the card has been suspended until payment is received.

75 days past due—Bank of America is to assess the account a late fee. The late fee charged by Bank of America was $20 through August 9, 2001. Effective August 10, 2001, Bank of America increased the late fee to $29 under the terms of the contract modification between Bank of America and DOD. Bank of America is allowed to assess an additional late fee every 30 days until the account is made current or charged off.

90 days past due—The APC is to issue a 90-day delinquency notification memorandum to the cardholder, the cardholder's immediate supervisor, and the company commander (or unit director). The company commander is to initiate an investigation into the delinquency and take appropriate action, at the company commander's discretion. At the same time, Bank of America is to send a “due process letter” to the cardholder providing notice that the account will be canceled if payment is not received within 30 days unless he or she enters into a payment plan, disputes the charge(s) in question, or declares bankruptcy.

120 days past due—The APC is to issue a 120-day delinquency notification memorandum to the cardholder's commanding officer. At 126 days past due, the account is to be canceled by Bank of America. Beginning in October 2001, once accounts were 120 days past due, Bank of America began sending files to DFAS listing these accounts for salary offset.

150 days past due—The point at which DFAS generally initiates action for salary offset.

180 days past due—Bank of America is to send a “precharge-off” or last call letter to the cardholder informing him or her that Bank of America will charge off the account and report the cardholder to a credit bureau if payment is not received. A credit bureau is a service that reports the credit history of an individual. Banks and other businesses assess the creditworthiness of an individual using credit bureau reports.

210 days past due—Bank of America is to charge off the delinquent account and, if the balance is $50 or greater, report it to a credit bureau. Some accounts are pursued for collection by Bank of America's recovery department; others are sent to attorneys or collection agencies for recovery.

The delinquency management process can be suspended when a cardholder's APC informs Bank of America that the cardholder is on official travel and is unable to submit vouchers and pay his or her account in a timely manner, through no fault of his or her own. Under such circumstances, the APC is to notify Bank of America that the cardholder is in “mission-critical” status. By activating this status, Bank of America is precluded from identifying the cardholder's account as delinquent until 45 days after such time as the APC determines the cardholder is to be removed from mission-critical status. According to Bank of America, approximately 800 to 1,000 cardholders throughout DOD were in this status at any given time throughout fiscal year 2001.
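The milestone schedule above, together with the mission-critical exception, amounts to a simple lookup from an account's age to the actions required of Bank of America, the APC, and DFAS. The following minimal sketch in Python was written for this discussion; the function and data names are ours, and the code is illustrative only, not Bank of America or Air Force software:

    # Milestone thresholds and actions, taken from the schedule above.
    # Illustrative only; not Bank of America or Air Force software.
    DELINQUENCY_MILESTONES = [
        (45, "Bank of America sends a payment-request letter (optional reminder call)"),
        (55, "Bank of America sends a presuspension warning letter"),
        (60, "APC issues 60-day memorandum; Bank of America suspends the account"),
        (75, "Bank of America assesses a late fee, repeating every 30 days"),
        (90, "APC issues 90-day memorandum; commander investigates; due process letter sent"),
        (120, "APC notifies commanding officer; salary-offset file sent to DFAS"),
        (126, "Bank of America cancels the account"),
        (150, "DFAS generally initiates salary offset"),
        (180, "Bank of America sends a precharge-off (last call) letter"),
        (210, "Account charged off; balances of $50 or more reported to a credit bureau"),
    ]

    def actions_triggered(days_past_due: int, mission_critical: bool = False) -> list[str]:
        """Return every milestone action triggered at or before the given account age.

        An account in mission-critical status is not reported as delinquent,
        so no milestone actions apply while that status is active.
        """
        if mission_critical:
            return []
        return [action for threshold, action in DELINQUENCY_MILESTONES
                if days_past_due >= threshold]

    # Example: an account 95 days past due has passed the 45- through 90-day
    # milestones but has not yet reached cancellation or charge-off.
    for action in actions_triggered(95):
        print(action)

Encoding the thresholds as data rather than as branching logic makes it straightforward to confirm that each milestone in the schedule appears exactly once.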
Pursuant to a joint request by the Chairman and Ranking Minority Member of the Subcommittee on Government Efficiency, Financial Management and Intergovernmental Relations, House Committee on Government Reform, and the Ranking Minority Member of the Senate Committee on Finance, we audited the controls over the issuance, use, and monitoring of individually billed travel card accounts and associated travel processing and management for the Department of the Air Force. Our assessment covered the following:

the reported magnitude and impact of delinquent and charged-off Air Force travel card accounts for fiscal year 2001 and the first 6 months of fiscal year 2002, along with an analysis of causes and related corrective actions;

an analysis of the universe of Air Force travel card transactions during fiscal year 2001 and the first 6 months of fiscal year 2002 to identify potentially fraudulent and abusive activity related to the travel card;

the Air Force's overall management control environment and the design of selected Air Force travel program management controls, including controls over (1) travel card issuance, (2) agency program coordinators' (APC) capacity to carry out assigned duties, (3) limiting card activation to meet travel needs, (4) transferred and “orphan” accounts, (5) procedures for terminating accounts when cardholders leave military service, and (6) access to Bank of America's travel card database; and

tests of statistical samples of transactions to assess the implementation of key management controls and processes for three Air Force units' travel card activity, including (1) travel order approval, (2) accuracy of travel voucher payments, (3) the timely submission of travel vouchers by travelers to the approving officials, and (4) the timely processing and reimbursement of travel vouchers by the Air Force and DOD.

We used as our primary criteria applicable laws and regulations, including the Travel and Transportation Reform Act of 1998 (Public Law 105-264), the General Services Administration's (GSA) Federal Travel Regulation, and the Department of Defense (DOD) Financial Management Regulation, Volume 9, “Travel Policies and Procedures.” We also used as criteria our Standards for Internal Control in the Federal Government and our Guide to Evaluating and Testing Controls Over Sensitive Payments. To assess the management control environment, we applied the fundamental concepts and standards in our internal control standards to the practices followed by management in the six areas reviewed. To assess the magnitude and impact of delinquent and charged-off accounts, we compared the Air Force's delinquency and charge-off rates to those of the other DOD services and federal civilian agencies. We also analyzed the trends in the delinquency and charge-off data from the third quarter of fiscal year 2000 through the first half of fiscal year 2002. In addition, we used data mining to select Air Force units for audit and to identify individually billed travel card transactions for further analysis. Our data mining procedures covered the universe of individually billed Air Force travel card activity during fiscal year 2001 and the first 6 months of fiscal year 2002 and identified transactions that we believed were potentially fraudulent or abusive. However, our work was not designed to identify, and we did not determine, the extent of any potentially fraudulent or abusive activity related to the travel card.
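As an illustration of the kind of data mining described above, the sketch below flags individually billed transactions posted under merchant category codes that are not travel related. It is a simplified example of one screening technique; the field names and the MCC list are hypothetical assumptions, not the actual criteria used in the audit:

    # Hypothetical screen for transactions at non-travel merchants.
    # The MCC list and record layout are illustrative assumptions.
    NON_TRAVEL_MCCS = {
        "5094": "jewelry store",
        "5732": "electronics store",
        "7273": "dating/escort service",
        "7995": "gambling transaction",
    }

    def flag_non_travel(transactions):
        """Yield each transaction whose MCC is on the non-travel list."""
        for txn in transactions:
            if txn["mcc"] in NON_TRAVEL_MCCS:
                yield {**txn, "reason": NON_TRAVEL_MCCS[txn["mcc"]]}

    sample = [
        {"cardholder": "A", "mcc": "3000", "amount": 412.50},  # airline; not flagged
        {"cardholder": "B", "mcc": "7995", "amount": 300.00},  # flagged for review
    ]
    for hit in flag_non_travel(sample):
        print(hit["cardholder"], hit["reason"], hit["amount"])

A transaction flagged this way is only a lead for further review, consistent with the caveat above that such screening does not by itself establish the extent of fraudulent or abusive activity.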
In performing our data mining, we obtained and analyzed information on travel card account status and credit history, security clearances, and disciplinary actions. To assess the overall control environment for the travel card program at the Department of the Air Force, we obtained an understanding of the travel process, including travel card management and oversight, by interviewing officials from the Office of the Under Secretary of Defense (Comptroller); the Department of the Air Force; the Defense Finance and Accounting Service (DFAS); Bank of America; and GSA. We reviewed applicable policies, procedures, and program guidance they provided. We visited three Air Force units to “walk through” the travel process, including the management of travel card use and delinquency. Further, we contacted one of the three largest U.S. credit bureaus to obtain credit history data and information on how credit-scoring models are developed and used by the credit industry for credit reporting. At each of the Air Force locations we audited, we also used our review of policies and procedures and the results of our “walk-throughs” of travel processes and other observations to assess the effectiveness of controls over segregation of duties among persons responsible for issuing travel orders, preparing travel vouchers, processing and approving travel vouchers, and certifying travel voucher payments. We performed a limited review of access controls for travel voucher processing at our three case study locations. We did not assess electronic signature controls over the electronic data processing of Air Force travel card transactions. We also reviewed computer system access controls for the Electronic Account Government Ledger System (EAGLS)—the system used by Bank of America to maintain DOD travel card data. To determine whether access controls for EAGLS were effective, we interviewed Bank of America officials and observed EAGLS functions and capabilities.

To test the implementation of key controls over individually billed Air Force travel card transactions processed through the travel system—including the travel order, travel voucher, and payment processes—we obtained and used the Bank of America database of fiscal year 2001 Air Force travel card transactions to review random samples of transactions at three Air Force locations. Because our objective was to test controls over travel card expenses, we excluded credits and miscellaneous debits (such as fees) from the population of transactions used to select random samples of travel card transactions to review at each of the three Air Force units we audited. Each sampled transaction was subsequently weighted in the analysis to account statistically for all charged transactions at each of the three units, including those that were not selected. We did not verify the accuracy of the data in the Air Force travel card database.

We selected three Air Force case study locations for testing controls over travel card activity by first selecting three large commands based on the number of travel card accounts, outstanding balances, and delinquencies. The three commands we selected accounted for about 38 percent of the total number of Air Force travel card accounts, 41 percent of the outstanding balance of travel card charges, and about 33 percent of the travel card delinquencies. We selected one installation from each of these commands for detailed testing based on the volume of travel card activity and delinquencies.
Table 11 presents the sites selected and the number of fiscal year 2001 transactions at each location. We performed tests on statistical samples of travel card transactions at each of the three case study sites to assess whether the system of internal controls over the transactions was effective, as well as to provide an estimate of the percentage of transactions by unit that were not for official government travel. For each transaction in our statistical sample, we assessed whether (1) there was an approved travel order prior to the trip, (2) the travel voucher payment was accurate, (3) the travel voucher was submitted within 5 days of the completion of travel, and (4) the travel voucher was paid within 30 days of submission of an approved travel voucher. We considered transactions not related to authorized travel to be abusive and incurred for personal purposes. Although we projected the results of our samples of these control attributes, as well as the estimate of personal use—or abuse—related to travel card activity, to the population of transactions at the respective case study locations, the results cannot be projected to the population of Air Force transactions or to the installations as a whole.

Tables 12 through 15 show (1) the results of our tests of key control attributes, (2) the point estimates of the failure rates for the attributes, (3) the two-sided 95 percent confidence intervals for the failure rates for each attribute, (4) our assessments of the effectiveness of the controls, and (5) the relevant lower and upper bounds of a one-sided confidence interval for the failure rate. All percentages in these tables are rounded to the nearest percentage point. We use one-sided confidence bounds to classify the effectiveness of a control activity. If the one-sided upper bound of the failure rate does not exceed 5 percent, then the control activity is effective. If the one-sided lower bound exceeds 10 percent, then the control is ineffective. Otherwise, we say that the control is partially effective. Partially effective controls may include those for which there is not enough evidence to assert either effectiveness or ineffectiveness. For example, if we were 95 percent confident that the failure rate for a particular control was no more than 3 percent (a one-sided upper bound of 3 percent), we would categorize that control activity as “effective” because 3 percent is less than the 5 percent standard. Similarly, if we were 95 percent confident that the failure rate for a particular control was at least 72 percent (a one-sided lower bound of 72 percent), we would categorize that control as “ineffective” because 72 percent is greater than the 10 percent standard.

Table 12 shows the results of our test of the key control related to the authorization of travel—approved travel orders were prepared prior to dates of travel. Table 13 shows the results of our test for effectiveness of controls in place over the accuracy of travel voucher payments. Our test work included determining whether (1) the travel voucher information was consistent with the dates and locations of travel authorized on the related travel order, (2) per diem was paid in the proper amount, and (3) transactions for lodging, airfare, and other expenses over $75 were supported by required receipts. Table 14 shows the results of our tests of key controls related to timely processing of claims for reimbursement of expenses related to government travel—timely submission of the travel voucher by the employee.
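To make the classification rule concrete before turning to the remaining results, the sketch below computes one-sided 95 percent exact binomial (Clopper-Pearson) bounds on a failure rate and applies the 5 and 10 percent standards. It assumes simple random sampling, whereas the estimates in tables 12 through 15 were weighted, so it is illustrative only:

    # One-sided 95 percent Clopper-Pearson bounds on a sample failure rate,
    # applied to the 5 and 10 percent standards described above.
    # Assumes simple random sampling; illustrative only.
    from scipy.stats import beta

    def classify_control(failures: int, n: int, confidence: float = 0.95) -> str:
        # One-sided upper bound on the failure rate.
        upper = 1.0 if failures == n else beta.ppf(confidence, failures + 1, n - failures)
        # One-sided lower bound on the failure rate.
        lower = 0.0 if failures == 0 else beta.ppf(1 - confidence, failures, n - failures + 1)
        if upper <= 0.05:
            return "effective"        # 95 percent confident failure rate is at most 5 percent
        if lower > 0.10:
            return "ineffective"      # 95 percent confident failure rate exceeds 10 percent
        return "partially effective"  # the evidence supports neither assertion

    print(classify_control(0, 100))   # effective: upper bound is about 3 percent
    print(classify_control(80, 100))  # ineffective: lower bound is about 72 percent

With no failures in 100 sampled transactions, the one-sided upper bound is about 3 percent; with 80 failures in 100, the one-sided lower bound is about 72 percent, matching the two examples given above.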
Table 15 shows the results of our tests of key controls related to timely processing of claims for reimbursement of expenses related to government travel—timely travel voucher approval and payment processing. To determine if cardholders were reimbursed within 30 days, we used the DFAS payment dates. We did not independently validate the accuracy of these reported payment dates.

We briefed DOD managers, including DFAS officials in DOD's Office of the Under Secretary of Defense (Comptroller) and Air Force officials in the office of the Assistant Secretary of the Air Force (Financial Management and Comptroller), as well as unit commanders, comptrollers, and installation agency program coordinators, on the details of our audit, including our findings and their implications. On November 26, 2002, we requested comments on a draft of this report. We received oral comments on December 17, 2002, and have summarized those comments in the “Agency Comments and Our Evaluation” section of this report. We conducted our audit work from January 2002 through mid-November 2002 in accordance with U.S. generally accepted government auditing standards, and we performed our investigative work in accordance with standards prescribed by the President's Council on Integrity and Efficiency.

Table 16 shows the travel card delinquency rates for the Air Force's major commands (and other Air Force organizational units at a comparable level) that had outstanding balances over $1 million during the 2-year period ending March 31, 2002. Commands with a March 31, 2002, outstanding balance under $1 million have been combined into “all other commands.” The Air Force's commands and other units are listed in descending order based on their respective delinquency rates as of March 31, 2002. Table 17 shows outstanding balances and delinquency rates by major command, listed in descending order of outstanding balances as of March 31, 2002. Tables 18 and 19 show the grade, rank (where relevant), and associated basic pay rates for 2001 for the Air Force's military and civilian personnel. The basic 2001 pay rates shown exclude other considerations such as locality pay and any allowances for housing or cost of living.

Staff making key contributions to this report include: Mario L. Artesiano, Paul S. Begnaud, Bertram J. Berlin, Fannie M. Bivins, Francine M. DelVecchio, Donald H. Fulwider, C. Robin Hodge, Woodward H. Hunt, Jeffrey A. Jacobson, Jr., Jonathan T. Meyer, Sue Piyapongroj, John R. Ryan, Sidney H. Schwartz, Robert A. Sharpe, Bennet E. Severson, and Lisa M. Warde.
Poor oversight and management of the Department of Defense (DOD) travel card program have led to high delinquency rates, costing DOD millions in lost rebates and increased ATM fees. As a result, Congress asked GAO to report on (1) the magnitude, impact, and causes of delinquencies, (2) the types of fraudulent and abusive uses of travel cards, and (3) the effectiveness of internal controls over DOD's travel card program. GAO previously reported on travel card management at the Army and Navy.

Air Force management has reduced travel card delinquencies through greater command attention and the use of travel card audits to identify problems and needed corrective actions. As of March 2002, the Air Force delinquency rate on average was about 5 percentage points lower than that of the rest of DOD and 1 percentage point higher than that of the federal civilian agencies. The Air Force's overall delinquency and charge-off problems were primarily associated with lower paid, low- to midlevel enlisted military personnel. Despite these improvements, a weak control environment contributed to significant abuse and potential fraud. For example, many of the problem cases identified were due to ineffective controls over the issuance and cancellation of travel cards and weaknesses in the assignment and training of agency program coordinators. During the period of our review, over 400 Air Force cardholders committed potential bank fraud by writing three or more nonsufficient fund (NSF) checks to Bank of America. Also, many cardholders used their cards for inappropriate purchases, such as cruises and event tickets. A significant relationship also existed between potential travel card fraud, abuse, and delinquencies and individuals with substantial credit history problems. Some cardholders had personal accounts placed in collection, while others had filed for bankruptcy prior to receiving government travel cards. The issuance of travel cards to virtually everyone who applied for them compounded these problems. GAO found documented evidence of disciplinary actions in less than half of the cases reviewed where cardholders wrote NSF checks or their accounts were charged off or placed in salary offset. GAO also found that over half of the cases reviewed involved individuals who still had secret or top-secret security clearances. Other control weaknesses related to the Air Force's failure to provide the necessary agency program coordinator training and to infrequent or nonexistent monitoring of travel card activities. The recently enacted fiscal year 2003 Defense appropriations and authorization acts require the Secretary of Defense to establish guidelines and procedures for disciplinary actions and to deny issuance of travel cards to individuals who are not creditworthy.
SSA's programs touch the lives of almost every individual in this country. Its Old-Age, Survivors, and Disability Insurance (OASDI) programs—which comprise what is commonly called Social Security—provide benefits to retired and disabled workers and their dependents and survivors; its Supplemental Security Income (SSI) program provides assistance to aged, blind, and disabled individuals with limited income and resources. In addition to paying benefits, SSA issues Social Security numbers to eligible individuals and maintains and provides earnings records for individuals working under employment covered by the program. SSA also helps process claims for black lung benefits and provides support to other programs, such as Medicare, Medicaid, and Railroad Retirement. More than 50 million beneficiaries receive benefits and services under SSA's programs, which in fiscal year 1996 accounted for $386 billion—nearly one-quarter of the nation's $1.6 trillion in federal expenditures.

SSA administers its programs through five core business processes—enumeration, earnings, claims, postentitlement, and informing the public. Through these processes, as shown in table 1, SSA processes claims for benefits, adjudicates appeals on disputed decisions, and handles the millions of actions required each year to keep beneficiary records current and accurate. SSA serves the public through its central office in Baltimore, Maryland, and a network of field offices that includes 10 regional offices, approximately 1,300 field offices, and a nationwide toll-free telephone number. Field offices are located in cities and rural communities across the nation and are the agency's physical point of contact with beneficiaries and the public.

SSA also depends on 54 state disability determination service (DDS) offices, along with one federally administered DDS, to help process claims under its disability insurance programs. State DDSs provide crucial support to the initial disability claims process—one that accounts for a large proportion of SSA's workload—through their role in determining an individual's medical eligibility for disability benefits. DDSs make decisions regarding disability claims in accordance with federal regulations and policies; the federal government reimburses 100 percent of all DDS costs of making disability determination decisions. During fiscal year 1996, the DDSs processed more than 2 million initial disability determination claims. The process begins when individuals apply for disability benefits at an SSA field office, where determinations are made on whether they meet the nonmedical criteria for eligibility. The field office then forwards these applications to the appropriate state DDS, where a disability examiner collects the necessary medical evidence to make the initial determination of whether the applicant meets the definition of disability. Once the applicant's medical eligibility is determined, the DDS forwards this decision to SSA for final processing.

Both SSA and the DDSs rely on information systems to support the processing of benefits. SSA uses an information processing network that links its distributed (field level) operations with its centralized mainframe computers at headquarters. Each core process is supported by hundreds of software programs that enable field office staff to perform data collection and on-line editing of client information, using either terminals or recently installed personal computers that communicate with SSA's centralized mainframe computers.
These mainframe computers establish and update beneficiary claims, process applications for Social Security numbers, and establish and maintain individuals' earnings histories. SSA's Chief Information Officer (CIO) provides primary oversight of the agency's information systems investments; the Office of the Deputy Commissioner for Systems (referred to as the Office of Systems) is responsible for managing all facets of information systems planning, development, acquisition, and operation.

State DDSs rely primarily on their internal systems to process medical determinations. In general, DDS computer systems consist of state-owned hardware of various ages and stages of completion, with differing capacity and maintenance levels. Similarly, the types of systems and levels of software used vary according to individual state needs. The majority of the DDSs—42 of the 54—use software developed by two private contractors, while the remaining 12 DDSs—referred to as independent DDSs—either process disability claims manually or use software that they have developed. DDS systems are linked to SSA's mainframe computers via the National Disability Determination Service System (NDDSS). Records are established on the NDDSS through direct input by DDS staff or by uploading data from local databases. Since 1992, SSA's Office of Systems has been responsible for disability system development. The office serves as the focal point for all disability-related hardware and software initiatives for the DDSs and is responsible for ensuring the integration of these activities on an enterprise basis.

Because of SSA's heavy reliance on technology, the Year 2000 problem presents the agency with the enormous challenge of reviewing all of its computer software and making the conversions required to ensure that its systems can handle the first change to a new century since the computer age began. The CIO has overall responsibility for the Year 2000 program; however, day-to-day responsibility for ensuring that changes are made to all systems used by SSA and the DDSs to support core business processes resides with the Office of Systems.

In assessing the actions taken by SSA to address the Year 2000 problem, we reviewed numerous documents, including its Year 2000 tactical plan, systems inventories, test plans, and implementation schedules. We also analyzed internal tracking reports developed by the agency to monitor the progress of its Year 2000 activities, as well as its Year 2000 quarterly reports submitted to the Office of Management and Budget (OMB). We discussed SSA's Year 2000 program activities with officials in various headquarters offices, including the Offices of the Deputy Commissioners for Systems; Operations; Finance, Assessment, and Management; and Programs and Policy. We also met with management and staff at SSA's program service centers in Birmingham, Alabama, and Philadelphia, Pennsylvania, and at its regional office in Atlanta, Georgia. In addition, we examined Year 2000 program activities at DDS offices in Albany, New York; Birmingham, Alabama; and Decatur, Georgia. We also interviewed representatives of the two private contractors responsible for performing Year 2000 work at most of the DDSs. We used our Year 2000 assessment guide in evaluating SSA's and the DDSs' readiness to achieve Year 2000 compliance. We conducted our review from January 1997 through September 1997, in accordance with generally accepted government auditing standards.
We requested comments on a draft of this report from the Commissioner of Social Security or his designee. The Commissioner provided written comments, which are discussed in the “Agency Comments” section and are reprinted in appendix I.

At 12:01 a.m. on January 1, 2000, many computer systems worldwide could malfunction or produce inaccurate information simply because the date has changed. Unless corrected, such failures could affect SSA benefits payments received by millions of Americans. The problem is rooted in how dates are recorded and computed. For the past several decades, systems have typically used two digits to represent the year—such as “97” for 1997—to save electronic storage space and reduce operating costs. In such a format, however, 2000 is indistinguishable from 1900. As an example of the potential impact of this ambiguity, a beneficiary born in 1925 and therefore turning 75 in 2000 could be seen as being negative 25 years old (if “now” is 1900)—not even born yet—and therefore ineligible for benefits that the individual had been receiving.

Correcting this problem will not be easy or inexpensive and must be done while such systems continue to operate. Many of the government's computer systems were developed 20 to 25 years ago, use a wide array of computer languages, and lack full documentation. Systems may contain up to several million lines of software code that must be examined for potential date-format problems. The enormous challenge involved in correcting these systems is primarily managerial. Agencies' success or failure will be determined largely by the quality of their program management and executive leadership. Top agency officials must understand the importance and urgency of this undertaking and communicate this to all employees. The outcome of these efforts will also depend on the extent to which agencies have institutionalized key systems-development and program-management practices, and on their experience with such large-scale software development or conversion projects. Accordingly, agencies must assess their information resources management capabilities and, where necessary, upgrade them. In so doing, they should consider soliciting the assistance of other organizations experienced in these endeavors.

To assist agencies with these tasks, our assessment guide discusses the scope of the challenge and offers a structured, step-by-step approach for reviewing and assessing an agency's readiness to handle the Year 2000 problem. The guide describes in detail five phases, each of which represents a major Year 2000 program activity or segment. These are the following:

Awareness. This is a critical first step. Although many people may have heard about a Year 2000 problem, they may not know what it entails or why it matters. For agency personnel, this knowledge is imperative. This is also the phase in which the team within the agency that will take the lead in correcting the problem is identified. The team then examines the problem's potential impact, gauges the adequacy of agency resources, develops a strategy, and secures strong, visible executive support.

Assessment. The main thrust of this phase is separating mission-critical systems—which must be converted or replaced—from important ones that should be converted or replaced and marginal ones that may be addressed now or deferred. Since the Year 2000 problem is primarily a business problem, it is essential to assess its likely impact on the agency's major business functions.
Following this, information systems in each business area should be inventoried and prioritized; project teams are then established and program plans devised. Testing strategies must be identified, and contingency planning must be initiated as well.

Renovation. This phase deals with actual changes—converting, replacing, or eliminating selected systems and applications. In so doing, it is important to consider the complex interdependencies among them. Changes must be consistent agencywide and information about them clearly disseminated to users.

Validation. Here, agencies test, verify, and validate all converted or replaced systems and applications, ensuring that they perform as expected. This critical phase may take over a year and consume up to half of the Year 2000 program's budget and resources. It is essential that agencies satisfy themselves that their testing procedures can meet the challenge and that their results can be trusted.

Implementation. Deploying and implementing Year 2000 compliant systems and components requires extensive integration and acceptance testing. And since not all agency systems will be converted or replaced simultaneously, it may be wise to operate in a parallel processing environment for a time, using old and new systems side by side. Such redundancy can act as a fail-safe mechanism until it is clear that all changed systems are operating correctly.

In February 1997, OMB, in consultation with the CIO Council, set governmentwide Year 2000 program milestones for completing the majority of the work in each phase of an agency's Year 2000 activities. According to OMB's schedule, the assessment phase for mission-critical systems, including performing an enterprisewide inventory, was to be completed by the end of June 1997.

SSA began examining the Year 2000 problem almost a decade ago and since then has taken various steps to raise agency awareness of the issue. In addition, it has made significant progress in assessing and renovating much of the software on its centralized mainframe systems—the systems that are essential to processing beneficiary claims and providing other services vital to the public. SSA first became aware of the Year 2000 problem in 1989, when one of the systems supporting its OASDI program experienced problems projecting dates past 1999. Drawing from its experiences in addressing this problem, SSA's Office of Systems took the lead in raising awareness of the Year 2000 issue and its potential magnitude and impact on the agency's operations. As part of these efforts, the Office of Systems developed a Year 2000 tactical plan that presented the agency's strategy for addressing the problem. It also established a committee composed of senior management to gain executive support for the project's activities, as well as a Year 2000 project team with responsibility for coordinating and reporting on the status of activities.

During its assessment phase, SSA completed key steps necessary for determining the extent to which its centralized mainframe systems were Year 2000 compliant. These steps included developing an inventory of these systems, procuring a software tool to assist in identifying date fields that needed changing, and developing program plans and schedules for addressing these systems. During this phase, SSA also established a strategy for testing its system solutions.
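The date-ambiguity failure described earlier, in which a beneficiary born in 1925 appears to be negative 25 years old in 2000, can be reproduced in a few lines. The following minimal sketch is ours, not SSA code; it shows the two-digit-year bug, the four-digit repair, and why the common "windowing" shortcut cannot fix fields, such as birth years, whose valid values span a full century:

    # The two-digit-year failure described earlier, and two repairs.
    # Illustrative only; not SSA code.

    def age_two_digit(birth_yy: int, now_yy: int) -> int:
        """Buggy age computation on two-digit years: 00 - 25 = -25."""
        return now_yy - birth_yy

    def expand_year(yy: int, pivot: int = 30) -> int:
        """Windowing: interpret yy below the pivot as 20xx, otherwise as 19xx.
        This works only when valid values span less than 100 years."""
        return (2000 if yy < pivot else 1900) + yy

    print(age_two_digit(25, 0))    # -25: the beneficiary appears not yet born
    print(2000 - 1925)             # 75: correct once fields hold four-digit years
    print(expand_year(25))         # 2025: windowing misreads a 1925 birth year

Because the birth years of living beneficiaries span more than any single 100-year window can cover, renovating such fields generally requires true expansion to four digits rather than windowing.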
According to the Assistant Deputy Commissioner for Systems, SSA’s overall approach gave highest priority to the major databases and mainframe systems developed and centrally managed by the Office of Systems because systems officials believed that these systems contained about 95 percent of all of the agency’s mission-critical software. The Assistant Deputy Commissioner defined the agency’s mission-critical software as being that which directly or indirectly affects SSA’s core business processes, such as the processing and issuance of monthly beneficiary checks. According to internal reports generated to track SSA’s progress, these systems have about 24,000 software modules and approximately 34 million lines of computer code. At the time of our review, SSA had made significant progress in the renovation of its mission-critical mainframe systems. Specifically, SSA reported that it had completed renovation and regression testing for almost 80 percent of its software modules. In addition, it had developed a Year 2000 test facility, as well as plans for conducting forward-date and integration testing. SSA expects all of its mission-critical systems to be certified as Year 2000 compliant and implemented by January 1999. An agencywide assessment and inventory of information systems and their components provide the necessary foundation for detailed Year 2000 program planning. A thorough analysis and inventory ensure that all systems are identified and linked to a specific business area or process and that all crosscutting systems are considered. Without a complete agencywide assessment, SSA cannot give full consideration to the extent of its Year 2000 problem and the level of effort required to correct it. Moreover, until such an assessment has been completed, SSA increases the risk that benefits and services will be disrupted. SSA did not include the DDS systems in its initial assessment of systems that it considered a priority for correction. SSA acknowledges that these systems are mission-critical because of their importance in determining whether an individual is medically eligible to receive disability payments. Accordingly, in December 1996 SSA began taking steps to assess the level of effort required to address the Year 2000 problem at the DDSs. These steps included contracting with the two vendors that originally installed software in 42 of the 54 state DDSs to inventory, assess, renovate, and test this software for Year 2000 compliance. Within these offices, the contractors also are responsible for ensuring that the production databases and NDDSS interfaces are Year 2000 compliant. SSA will require the 12 independent DDSs whose software was not installed by these contractors to perform their own corrective actions or, in a limited number of cases, will perform corrective actions for them. Even with Year 2000 action now underway, however, the potential magnitude of the DDS problem makes systems correction by January 1, 2000, a high-risk area. In particular, although Office of Systems personnel believe that their assessment of centralized mainframe systems considered about 95 percent of the agency’s mission-critical software, inventories and assessments for most DDSs have not yet been completed. SSA therefore cannot yet know the full level of effort that will be required to make these mission-critical systems Year 2000 compliant. Estimates of the amount of software used by the DDSs suggest that extensive work would be necessary to make them Year 2000 compliant. 
Specifically, according to representatives of the two contractors, among the 42 DDSs for which they are responsible, about 33 million lines of software code must be considered for Year 2000 changes. They explained that because the software used by these DDSs to process disability claims has been modified over time to meet individual state needs, 42 different systems must essentially be assessed. In addition, although SSA did not have information on the total amount of disability software used by the independent DDSs, officials in just one of the offices that we visited said that they will have to review approximately 600,000 lines of code, involving over 400 programs, to determine where corrective action is needed. Because DDS operations are vital to SSA’s ability to process initial disability claims, it is important that these systems be addressed as soon as possible. Disruptions to this service due to incomplete Year 2000 conversions could prevent or delay SSA’s assistance to millions of individuals across the country. In discussing the status of Year 2000 activity for the DDSs, SSA’s Assistant Deputy Commissioner for Systems acknowledged the need for more diligence in assessing and renovating the states’ systems and said that SSA oversight of this work will increase.

An essential yet challenging aspect of SSA’s Year 2000 work will be ensuring that data exchanges with other federal and state agencies and businesses are Year 2000 compliant. This will not be easy, and cooperation and assistance from other agencies and organizations will be crucial. However, given the vast number of entities with which SSA exchanges data, it is a necessary step to avoid having SSA’s own data corrupted by noncompliant information from other sources. SSA recognizes the importance of this matter and has taken a number of steps to address it. Because many of these steps were under development at the time of our review, we could not judge their effectiveness. As the year 2000 rapidly approaches, however, SSA must be diligent in implementing measures to monitor progress in this area and, where necessary, protect the integrity and usefulness of its data. At the same time, SSA needs to have contingency plans to ensure that strategies exist for mitigating any risks associated with this and any of the other Year 2000-related issues that can affect the agency’s ability to provide Social Security and other benefits and services to the public.

In addressing the Year 2000 problem, agencies need assurance that data received from other organizations are accurate. Even if an agency has made its own systems Year 2000 compliant, they can still be contaminated by incorrect data entering from external sources. To combat this, agencies must inventory and assess all internal and external data exchanges and coordinate Year 2000 compliance activities, including, if necessary, the development of appropriate bridges to maintain the integrity of replaced or converted systems and the data within them.

SSA exchanges data files with hundreds of federal and state agencies and thousands of businesses. These files contain data from such organizations as the Internal Revenue Service, the Department of the Treasury, and the states. Such exchanges may involve, for example, data reported on individuals’ tax-withholding forms, or data pertaining to state wages and unemployment compensation.
Unless SSA is able to exchange data that are Year 2000 compliant, program benefits and eligibility computations that are derived from the data provided through these exchanges may be compromised and SSA’s databases corrupted. SSA has for some time recognized the seriousness of this problem and is taking action to address it. In 1995, it began sending letters to its data exchange partners to advise them of the Year 2000 issue and the agency’s plans for addressing it. During our review, SSA was in the process of coordinating with external organizations on issues concerning data formats, schedules for conversion and completion, and the need for bridging to enable the exchange of data that are not compliant. In addition, to facilitate data exchange compliance, SSA has developed a database that maintains information on the status of compliance activities related to all of its incoming and outgoing file exchanges. At the time of our review, this database contained information on over 6,700 files that are exchanged with external organizations.

Given the magnitude of its data exchanges, one of SSA’s biggest challenges will be coordinating its compliance work with that of its exchange partners and, where necessary, developing mechanisms to ensure the continued processing of its data. It will be critical for SSA to protect against the potential for introducing and propagating errors from one organization to another. In discussing SSA’s strategy for addressing this matter, the Assistant Deputy Commissioner for Systems stated that priority will be given to ensuring the compliance of data files received from external sources that affect SSA’s ability to process and pay benefits. SSA has identified approximately 100 files in this category, although the Year 2000 project director stated that this number could change as SSA continues to review and include compliance information in its tracking system. Further, because the accuracy of the data SSA receives is as important as whether the data are presented in the correct format, the Assistant Deputy Commissioner for Systems said that SSA plans to develop, and subject all incoming data files to, “reasonableness” edit checks; a simple illustration of such an edit follows this section. These are positive steps on SSA’s part toward ensuring the integrity and accuracy of its data after the year 2000 arrives. However, SSA must be diligent in implementing strategies and measures that facilitate its coordination of compliance activities with other agencies and that give it precise knowledge of the status of its data exchanges.

Contingency planning is essential to Year 2000 risk management. It is the mechanism by which an organization ensures that its core business processes will continue if corrective work has not been completed. Agencies should develop realistic contingency plans, including the use of manual or contract procedures, to ensure the continuity of their major business processes. At the time of our review, SSA officials acknowledged the importance of contingency planning but had not developed specific plans to address how SSA would continue to support its core business processes if its Year 2000 conversion activities experienced unforeseen disruptions. SSA officials believe that the agency’s early start in addressing the initiative will ensure that all systems are converted before any system failures are experienced. In addition, SSA did not believe it had an alternative to completing its Year 2000 work on time since it cannot process and ensure the payment of benefits without its many integrated systems.
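The report does not specify the form SSA’s “reasonableness” edits will take; the sketch below is a hypothetical illustration of the idea for one incoming exchange record. The field names, bounds, and record layout are assumptions for illustration only, not SSA’s actual edit criteria.

```python
from datetime import date

# Hypothetical "reasonableness" edits for one incoming data exchange record.
# Field names and bounds are illustrative assumptions, not SSA's criteria.

def check_record(record: dict) -> list:
    """Return a list of problems found in one incoming record."""
    problems = []
    year = record.get("wage_year")
    if not isinstance(year, int) or not 1937 <= year <= date.today().year:
        # A noncompliant partner truncating years to two digits (e.g. 0 for
        # 2000) fails this check instead of silently corrupting the database.
        problems.append(f"wage_year out of range: {year!r}")
    wages = record.get("reported_wages")
    if not isinstance(wages, (int, float)) or not 0 <= wages < 10_000_000:
        problems.append(f"reported_wages not plausible: {wages!r}")
    return problems

print(check_record({"wage_year": 0, "reported_wages": 31200.0}))
# -> ['wage_year out of range: 0']
```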
In response to our concerns regarding the need for such plans, however, the Assistant Deputy Commissioner for Systems said that SSA will develop contingency plans to ensure the continued operation of systems supporting its core business processes. In this regard, SSA established a Year 2000 contingency workgroup and has begun outlining a contingency strategy for these processes.

Like other federal agencies, SSA is vulnerable to systems failures resulting from the computer software changes necessitated by the new millennium. Given that SSA’s programs touch virtually all of us, it is especially vital that this agency make sufficient plans to ensure that it achieves Year 2000 compliance on time. SSA has made significant progress in addressing many of the systems that are critical to its mission and is regarded by many as a leader in the federal arena. Nonetheless, the agency is at risk of not being able to adequately process disability benefits at the turn of the century because it has not assessed and corrected systems used by the state DDS offices to support the processing of initial disability claims. Within the last year, SSA has begun to address the DDS issue. But until it has made a full assessment of these systems, it will not know the magnitude of the problem and, therefore, the level of effort required to correct it. Further, while SSA officials clearly recognize the importance of solving the Year 2000 problem, to reduce the risk of failure with its own effort, it is vital that the agency take every measure possible to ensure that it is well positioned to deal with unexpected problems and delays. This includes promptly addressing critical data exchange issues as well as implementing Year 2000 contingency planning.

In light of the importance of SSA’s function to most Americans and the risks associated with its Year 2000 program, we recommend that the Commissioner of Social Security direct SSA’s Chief Information Officer, in conjunction with the Deputy Commissioner for Systems, to take the following actions:

Require expeditious completion of the assessment of mission-critical systems at all state DDS offices and use the results of this assessment to develop a Year 2000 plan that identifies, for each system, the specific tasks and resources required and specific schedules and milestones for completing all tasks and phases of the conversion for each state system.

Strengthen SSA’s monitoring and oversight of all state DDS Year 2000 activities, including ensuring that all conversion milestones are met and that contractors and independent states submit biweekly reports that identify progress against milestones in renovating all claims processing software, databases, and data interfaces.

Include in SSA’s quarterly reports to OMB information on the status of DDS Year 2000 activities.

Require expeditious completion of the agency’s Year 2000 compliance coordination with all data exchange partners and of efforts to include specific information on the status of compliance activities in the automated data exchange tracking system. SSA should then use this system to measure and report on the progress and coordination of its data exchange compliance activities.

Develop contingency plans that articulate specific strategies for ensuring the continued operation of core business functions if planned corrections are not completed in time or if systems fail to operate as intended.
These plans should fully consider the disability claims processing functions within the DDSs and the development and activation of manual or contract procedures, as appropriate.

In commenting on a draft of this report, SSA agreed with all five of our recommendations and identified specific actions that it will take to ensure an adequate transition to the year 2000. SSA also offered a specific comment directed to particular language in the draft report, which we incorporated where appropriate.

As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from its date. At that time, we will provide copies to the Commissioner of Social Security; the Director, Office of Management and Budget; appropriate congressional committees; and other interested parties. Copies will also be made available to others upon request. Please contact me at (202) 512-6253 or by e-mail at willemssenj.aimd@gao.gov if you have any questions concerning this report. Major contributors to this report are listed in appendix II.

The following is GAO’s comment on the Social Security Administration’s letter of October 2, 1997. 1. Report revised to reflect SSA’s comment.

Valerie C. Melvin, Assistant Director
Mirko J. Dolak, Technical Assistant Director
William G. Barrick, Senior Information Systems Analyst
Michael A. Alexander, Senior Information Systems Analyst
William N. Isrin, Operations Research Analyst
Michael P. Fruitman, Communications Analyst
Pursuant to a congressional request, GAO reviewed the Social Security Administration's (SSA) actions to achieve Year 2000 information systems compliance, focusing on the adequacy of steps taken by SSA to ensure that computing problems related to the year 2000 are fully addressed, including its oversight of state disability determinations services' (DDS) Year 2000 program activities. GAO noted that: (1) SSA first recognized the potential impact of the Year 2000 problem almost a decade ago, and was able to launch an early response to this challenge; (2) it initiated early awareness activity and has made significant progress in assessing and renovating mission-critical mainframe software that enables it to provide social security benefits and other assistance to the public; (3) because of the knowledge and experience gained through its early Year 2000 efforts, SSA has come to be regarded as a federal leader in addressing this issue; (4) SSA's Assistant Deputy Commissioner for Systems chairs the Chief Information Officers Council's Subcommittee on the Year 2000 and works with other federal agencies to address Year 2000 issues across government; (5) while SSA deserves credit for its leadership, the agency remains at risk that not all of its mission-critical systems--those necessary to prevent the disruption of benefits--will be corrected before January 1, 2000; (6) at particular risk are the systems that have not yet been assessed for the 54 state DDSs that provide vital support to SSA in administering its disability insurance programs; (7) private contractors SSA hired to make 42 of the 54 state DDS systems Year 2000 compliant reported that these offices had at least 33 million additional lines of software code that must be assessed and, where necessary, renovated; (8) given the potential magnitude of this undertaking, SSA could face major disruptions in its ability to process initial disability claims for millions of individuals throughout the country if these systems are not addressed in time for corrective action to be completed before the change of century; (9) SSA also faces the challenge of ensuring that its critical data exchanges with federal and state agencies and other businesses are Year 2000 compliant; (10) it has taken a number of positive steps in this direction, such as identifying incoming and outgoing file exchanges with the external business community and developing a database to maintain information on the status of compliance activities; (11) however, because SSA must rely on the hundreds of federal and state agencies and the thousands of businesses with which it exchanges files to make their systems compliant, SSA faces a definite risk that inaccurate data will be introduced into its databases; and (12) that risk could be magnified if SSA does not develop contingency plans to ensure the continuity of its critical systems and activities should systems not be corrected in time.
The Corps is the world’s largest public engineering, design, and construction management agency. Located within the Department of Defense, the Corps has both military and civilian responsibilities. Through its Civil Works Program, the Corps plans, constructs, operates, and maintains a wide range of water resources projects. The Corps’ Civil Works Program has nine major functional areas, also known as business lines: Navigation, Flood Risk Management, Environment, Recreation, Hydropower, Water Supply, Emergency Management, Regulatory Program, and Support for Others. The Civil Works Program is organized into three tiers: a national headquarters in Washington, D.C.; 8 regional divisions; and 38 local district offices (see fig. 1).

The major steps in developing a Corps construction project are shown in figure 2. Usually, the Corps becomes involved in water resource construction projects when a local community perceives a need or experiences a problem that is beyond its ability to solve and contacts the Corps for assistance. If the Corps does not have the statutory authority required for studying the problem, the Corps must obtain authorization from Congress before proceeding. Studies have been authorized through legislation, typically a WRDA, or, in some circumstances, through a committee resolution by an authorizing committee. Next, the Corps must receive an appropriation to study the project, which it seeks through its annual budget request to Congress. Under WRDA 2007 amendments (Pub. L. No. 110-114, § 2043(b), 121 Stat. 1041 (2007)), after receiving authorization and an appropriation, studies were conducted in two phases: reconnaissance and feasibility. The reconnaissance phase was conducted at full federal expense to determine if the problem warranted federal participation in a feasibility study and how the problem could be addressed. During the reconnaissance phase, the Corps also assessed the level of interest and support from nonfederal entities such as state, tribal, county, or local governments or agencies that may become sponsors. If the Corps determined that further study was warranted, the district office typically sought agreement from the local sponsor to share costs for a feasibility study. WRRDA 2014 eliminated the reconnaissance phase to accelerate the study process and allow the Corps to proceed directly to the feasibility study. The conference report accompanying WRRDA 2014 also states that the Corps may terminate a study when it is clear there is no demonstrable federal interest for a project or that construction of the project is not possible for technical, legal, or financial reasons. At the time of our review, the Corps had not yet issued guidance on the elimination of the reconnaissance phase.

The purpose of the feasibility phase is to further study the problem and make recommendations on whether the project is worth pursuing and how the problem should be addressed. Corps guidance states that typical feasibility studies should be completed in 18 to 36 months. According to Corps documents, the district office conducts the study and the needed environmental studies and documents the results in a feasibility report that includes a total project cost estimate based on the recommended plan. The Chief of Engineers reviews the report and decides whether to sign a final decision document, known as the Chief’s Report, recommending the project for construction. The Chief of Engineers transmits the Chief’s Report and the supporting documentation to Congress through the Assistant Secretary of the Army for Civil Works and the Office of Management and Budget. Congress may authorize the project’s construction in a WRDA or other legislation.
When Congress approves a project for construction, it typically authorizes a total cost for the project based on estimates prepared by the Corps. Most construction projects are authorized during the preconstruction engineering and design phase. The purpose of this phase is to complete any additional planning studies and all of the detailed, technical studies and designs needed to begin construction of the project. Once the construction project has been authorized and preconstruction engineering and design has been funded through completion of the plans and specifications for the first construction contract, the Corps seeks funds to construct the project through the annual budget formulation process. As part of the budget process, the Army, with input and data from Corps headquarters, division, and district offices, develops a budget request for the agency. Beginning in fiscal year 2006, the Corps introduced what it refers to as performance-based budgeting as a way to focus funding requests on those projects with the highest anticipated return on investment, rather than on the wider set of projects that meet budget policies, as it sought to do in the past. Under its current budget formulation process, the Corps uses performance metrics to evaluate projects’ estimated future outcomes and gives priority to those it determines have the highest expected returns for the national economy and the environment, as well as those that reduce risk to human life. Budget justification materials are provided to the House and Senate Appropriations Committees for consideration. Through the conference committee reports accompanying appropriations acts, Congress directs funds for individual projects in increments over the course of several years. The Corps considers a project or study to have been appropriated funds if the project or study has received such direction in a committee report. If the project has been appropriated funds, the district enters into a cost-sharing agreement with the nonfederal sponsor. Once funds have been appropriated and a cost-sharing agreement is in place, the construction phase can begin and the Corps may obligate funds for a project. Construction is generally managed by the Corps but performed by private contractors. During construction, the Corps may request and Congress may enact scope or cost changes.

Under current federal statute, the process for deauthorizing construction studies is initiated if the study has not been appropriated funds for 5 consecutive fiscal years. Specifically, the Secretary of the Army is required to annually transmit to Congress a list of water resources studies that have not been completed and have not been appropriated funds in the last 5 full fiscal years. Congress has 90 days after the submission of that list to appropriate funds, or the study is deauthorized.

Current federal statute also requires a similar deauthorization process for construction projects. The Secretary of the Army is required to transmit to Congress a list of projects—or separable elements—that have not had funds obligated for 5 full consecutive fiscal years. Beginning with WRDA 2007, this list was required to be sent to Congress annually; prior to WRDA 2007, the list was required biennially. If funds are not obligated for planning, design, or construction of a project on that list during the next fiscal year, the project is deauthorized, and the Secretary of the Army is to publish the list of deauthorized projects in the Federal Register.
The Corps’ report of a $62 billion backlog list of more than 1,000 projects is incomplete because the agency does not track all of its authorized construction projects and studies. Specifically, the Corps does not enter all authorized projects and studies into its databases because of the absence of a policy to do so. As a result, we found the Corps’ reported backlog list likely underestimates the complete construction backlog. Without having complete information on its backlog, the Corps does not know the full extent of unmet water resources needs of the nation, and Congress does not have complete information to make informed decisions on project and study authorizations and appropriations.

We found that the Corps’ reported backlog likely under-represents the complete backlog of construction projects in terms of both cost and number of projects. According to Corps headquarters officials, the backlog list is manually maintained by one staff person as a secondary duty. Our past work has found that using manual processes to maintain data can hinder an organization’s ability to ensure that data are complete and accurate. Corps officials said, and our review found, that some projects that were authorized are included on the backlog list, but not their associated cost, therefore raising questions about the validity of the $62 billion estimate. For example, the Amite River and Tributaries, Louisiana, East Baton Rouge Parish Watershed project was authorized in WRDA 1999 and modified most recently in WRDA 2007 for a total cost of $187 million, but according to Corps officials, construction funds have not been appropriated for this project. Although the project’s name appears on the Corps’ backlog list, there is no dollar amount associated with that project, so the cost is not included in the Corps’ reported backlog list. We found a total of 12 projects authorized in WRDA 1999 that are included in the Corps’ reported backlog list but do not have an associated cost. However, internal control standards in the federal government call for agencies to clearly and promptly document transactions and other significant events from authorization to completion. Corps headquarters officials acknowledged that information was missing from their databases and said they do not currently have an estimate for the cost or number of projects that are not included in their databases.

Corps headquarters officials told us that the agency does not have a policy instructing district offices to enter projects that are authorized but have not been appropriated funds into their databases, and it is left to the discretion of the district offices to do so. Officials from 1 of the 16 district offices we spoke with said the district has developed guidance to enter all authorized projects into the Corps’ centralized databases, regardless of whether the projects had funds appropriated. Officials at the 15 other district offices told us they enter projects into the Corps’ databases only after funds are appropriated. Corps headquarters officials said that the agency’s databases were created primarily as project management databases, and therefore, projects are generally not entered into the databases until they are active and funds are appropriated. However, federal standards for internal control call for agencies to document internal control in management directives, administrative policies, or operating manuals and be readily available for examination.
We also have previously found that it is important to have agencywide policies and procedures to help ensure consistent treatment, especially if employees are geographically dispersed. Without written policies or guidance, Corps district offices will likely continue to inconsistently enter projects that are authorized but not funded into their databases, and that will continue to result in incomplete data.

Because authorized projects are not consistently entered into the Corps’ centralized databases, officials from 10 of the 16 district offices we spoke with said they maintained their own lists of authorized projects, including those that were authorized but did not have funds appropriated. Officials from some of these districts said that they do so in order to maintain contact with nonfederal sponsors and so that they have complete project information for budget presentation preparations. Officials from two district offices we interviewed said that they do not maintain a list of authorized projects that did not have funds appropriated, but nonfederal sponsors often contact them regarding these projects, so the officials were aware of them. Officials from three districts we interviewed said they do not maintain a list of all authorized projects in their district and are unable to estimate how many projects from their district are not included in the Corps’ databases. Officials in one of these districts said that they are unaware of the number of projects that have been authorized and not funded but estimated the number to be large.

The Corps’ reported backlog does not include studies. Corps officials stated the agency does not track a backlog of all authorized studies, nor does it have a policy instructing districts to do so, due to manpower and resource constraints. However, because federal statute requires the Corps to submit a list to Congress of incomplete water resources studies for which no funds have been appropriated for 5 full fiscal years, the Corps needs to know which studies are eligible for deauthorization. Without these data, the Corps cannot comply with the requirement to submit a list to Congress identifying studies for deauthorization that have not had funds appropriated for 5 fiscal years.

Without having a complete backlog list of projects and studies, it is difficult for the Corps to know the full universe of unmet water resources needs in the country. Our prior work also found that the Corps’ budget presentation is not transparent and only includes information on the projects the President proposes to fund in the budget year. According to that work, congressional users of the Corps’ budget presentation said that not having information on all projects limits the ability of Congress to make fully informed decisions. Similarly, WRDA 2007 required the Corps to submit an annual fiscal transparency report, including a list of all projects that have been authorized but for which construction is not complete. The Corps has not submitted this report. The Corps estimates it will submit the comprehensive backlog report of projects required in WRRDA 2014 by March 2015, once it completes its new database, which is discussed below. Until the Corps submits such a report to Congress, lawmakers will not have complete information to make informed decisions on construction project and study authorizations and appropriations.
Corps headquarters officials recognize that they are missing project backlog data for some authorized projects and have begun to implement an initiative known as the Smart Use of Systems Initiative, which is designed to add projects to a new agency database. One of the goals of this initiative is to create a database to include all authorized projects. Headquarters officials said the agency hired a contractor in February 2014 to create an inventory of all projects that were authorized since the passage of WRDA 1986. This inventory is a major component of a new, centralized project database called the Civil Works Integrated Funding Database. They said that, to create this inventory, the contractor will search WRDA 1986 and other legislation, such as appropriations acts, that may include project authorizations, and then match those projects with information contained in the Corps’ databases. Officials said this process will require the contractor to work closely with Corps staff because projects may have different names in legislation than the project names contained in the Corps’ databases, as illustrated in the sketch following this section. According to Corps headquarters officials, once the contractor completes the inventory of all projects authorized since WRDA 1986, Corps headquarters officials will add those projects authorized prior to WRDA 1986. Corps headquarters officials said that once the new database has been implemented, district or headquarters officials will be required to enter data on new construction projects following authorization. As of the end of June 2014, Corps headquarters officials said that the contractor had completed the initial phase of the inventory of projects authorized since WRDA 1986 and that the contractor is updating the inventory based on comments from Corps headquarters officials. These officials estimate the Civil Works Integrated Funding Database will contain all authorized projects by the end of the 2014 calendar year. Officials said the inventory will not include authorizations for studies, and they have not determined what, if any, mechanisms they would put in place to track these studies. However, federal internal control standards call for agencies to have mechanisms in place to appropriately document transactions and other significant events.

The Corps has not identified all eligible construction projects and studies for deauthorization and has not complied with statutory requirements to notify Congress of all projects and studies eligible for deauthorization. As discussed earlier, the Corps does not require its district offices to enter all authorized projects into its databases; therefore, the agency is unlikely to identify as eligible for deauthorization those projects that are excluded from the database and have not had funds obligated for 5 fiscal years. In addition, the Corps has not complied with its statutory requirements to notify Congress of all projects that have not had funds obligated in 5 fiscal years and cannot demonstrate it has notified Congress of projects eligible for deauthorization on an annual basis. Moreover, the Corps has not notified Congress of eligible studies for deauthorization as required by statute. As discussed earlier, not all projects are included in the Corps’ databases because the agency does not have policies and procedures in place to enter all authorized projects; therefore, some projects that have not had obligations in 5 fiscal years are unlikely to appear on the Corps’ list of projects eligible for deauthorization.
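The name-matching task described above can be illustrated with a small sketch. This is a hypothetical illustration, not the contractor’s actual method or tooling; the project titles below are invented variants, and the similarity threshold is an assumption.

```python
import difflib

# Hypothetical illustration of matching project titles in legislation to
# project names in a database; neither the names nor the method are the
# Corps' own.
legislative_titles = [
    "Amite River and Tributaries, East Baton Rouge Parish Watershed, Louisiana",
]
database_names = [
    "AMITE RIVER & TRIBS, E BATON ROUGE PARISH, LA",
    "MISSISSIPPI RIVER LEVEES, LA",
]

def best_match(title, candidates):
    """Return the closest candidate name and a similarity score in [0, 1]."""
    scored = [(c, difflib.SequenceMatcher(None, title.upper(), c.upper()).ratio())
              for c in candidates]
    return max(scored, key=lambda pair: pair[1])

for title in legislative_titles:
    name, score = best_match(title, database_names)
    # Matches below an assumed threshold would be routed to Corps staff for
    # manual review, as the report notes the contractor must work with staff.
    flag = "auto-accept" if score >= 0.6 else "manual review"
    print(f"{title} -> {name} ({score:.2f}, {flag})")
```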
Corps headquarters officials said that the project deauthorization process begins when Corps headquarters officials and contractors query the agency’s centralized project databases to identify any project that has not had obligations in the previous 5 fiscal years; a simplified sketch of this eligibility test follows this section. Corps headquarters officials then send a memorandum (deauthorization memorandum) outlining statutory deauthorization provisions for projects along with the draft list of projects that are eligible for deauthorization to the division offices, which in turn are to send the list to the district offices for verification, according to these officials. As part of this effort, district offices are to verify, among other things, the project name, the last year the project had funds obligated, whether it met deauthorization criteria as outlined in statute, and an explanation of why the project has not had funds obligated. As stated previously, the Corps does not generally enter projects into its databases until funds are appropriated; therefore, the Corps’ list of projects eligible for deauthorization is unlikely to contain those authorized projects that have not been appropriated funds or obligated funds within 5 full fiscal years, as required by statute. Although Corps headquarters officials said that this deauthorization process occurs annually, headquarters officials were able to provide us with the lists of projects that were verified and returned by the division and district offices for only one year (2012).

The deauthorization memorandum instructs the district offices to review and verify the information contained on the draft list. Headquarters officials said that district officials also are to add information on the year in which the project was authorized to the list of eligible projects, but that information is not currently included in the Corps’ databases. However, the deauthorization memorandum does not specify that district offices are to add projects missing from the list that have not had funds obligated for 5 years. Officials we interviewed from 5 of the 16 Corps district offices in our review said they do not attempt to identify and add projects to the draft list because they were not aware that they were to do so. Officials from two other district offices said their division does not send the draft list to them unless there are projects for that district listed, so there would not be an opportunity for these district offices to add projects in such situations. However, officials from three other district offices we spoke with added projects to the headquarters draft list. For example, Charleston district officials said they added seven projects to the 2012 headquarters draft list that were authorized in WRDA 2007 but had not had funds appropriated and therefore did not have funds obligated. However, neither Corps headquarters nor the Assistant Secretary of the Army for Civil Works transmitted to Congress a list of projects eligible for deauthorization for fiscal year 2012, as required under statute.

The Corps has not consistently complied with statutory deauthorization notification requirements. Specifically, with respect to project notification requirements, the Corps has not notified Congress of all deauthorization-eligible projects, nor has the Corps consistently provided Congress notification in the required time frames. With respect to study notification requirements, the Corps has not notified Congress of deauthorization-eligible water resources studies.
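As a concrete illustration of the eligibility test described at the start of this section, the sketch below flags projects with no obligations in the previous 5 full fiscal years. The data values and structures are hypothetical; the report does not describe the Corps’ actual databases or query tools at this level of detail. Under the statute, a project on the transmitted list that again receives no obligations in the next fiscal year is then deauthorized.

```python
# Hypothetical sketch of the statutory eligibility test: a project is
# reported as eligible for deauthorization if no funds were obligated
# in the previous 5 full fiscal years. Data values are invented.

obligations = {
    # project name -> {fiscal year: dollars obligated}
    "Project A": {2007: 250_000, 2011: 40_000},
    "Project B": {2006: 1_200_000},  # nothing obligated since FY 2006
}

def eligible_for_deauthorization(history, current_fy, lookback=5):
    """True if no obligations occurred in the lookback fiscal years before current_fy."""
    window = range(current_fy - lookback, current_fy)  # e.g. FY 2007-2011 for 2012
    return all(history.get(fy, 0) == 0 for fy in window)

current_fy = 2012
draft_list = [name for name, history in obligations.items()
              if eligible_for_deauthorization(history, current_fy)]
print(draft_list)  # ['Project B']; Project A had FY 2011 obligations
```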
As stated previously, current statutory requirements provide for a project to be reported to Congress for deauthorization if the project has not had funds obligated for 5 consecutive fiscal years, and then to be automatically deauthorized if funds are not obligated in the next fiscal year after transmittal of the list to Congress. However, Corps district officials told us that they have recommended that projects that headquarters officials have identified as eligible for deauthorization not be included on the list of projects sent to Congress, even though funds were not obligated for those projects for 5 consecutive fiscal years. Specifically, officials from 6 district offices informed us that they typically add comments to a draft list asking that a project not be included on the list of projects eligible for deauthorization if a nonfederal sponsor is still interested in pursuing the project or if the district finds continued federal interest in the project. Due to staff turnover at headquarters and missing documentation on past deauthorization efforts, headquarters officials said they are unable to determine the reasons why projects were not identified as eligible for deauthorization. Moreover, Corps headquarters officials were unable to provide us with agency guidance or policy used to determine what projects they consider exempt from project deauthorization eligibility.

In our analysis of the 2011 draft list of projects eligible for deauthorization sent to the district offices, we found that headquarters had included 43 projects on the draft list that had not been obligated funds from fiscal year 2007 through 2011—the 5 fiscal years preceding the date of the list for Congress. However, 41 of those 43 projects were not included in the Corps’ list of projects eligible for deauthorization that was sent to Congress. According to headquarters officials, some of the 41 projects may not have been eligible for deauthorization because, for example, they were Continuing Authorities Projects, which are not subject to deauthorization, or the project was incorporated into another ongoing project.

Although Corps headquarters officials were unable to provide us with the lists that included district comments, officials from 6 of the district offices we interviewed told us that projects may be removed from consideration by headquarters if nonfederal sponsors support projects or if there is continued federal interest in projects that have not had funds obligated for 5 fiscal years. For example:

The Galveston district has had a project on the Corps headquarters draft list of projects eligible for deauthorization in 2010, 2011, 2012, and 2013. Galveston district officials said the nonfederal sponsor expressed continued interest in the project and requested that the project not be deauthorized. According to Corps data, funds have not been obligated for this project since 2006, but the project has not been deauthorized.

The Jacksonville district has had a project on the headquarters list of projects eligible for deauthorization in 2010, 2011, 2012, and 2013. According to Jacksonville district officials’ comments on the 2012 list, the nonfederal sponsor continued to support the project. Corps data showed that funds have not been obligated for this project since 2006, but it has not been deauthorized.

The Louisville district had a project on the headquarters list of projects eligible for deauthorization in 2008 and 2009.
Louisville district officials said construction on some components of the project is not yet complete because the nonfederal sponsor has not been able to contribute its portion of the funds for those components. Because the nonfederal sponsor is still interested and some construction had been completed, district officials said they did not recommend that the project be included in the list of projects eligible for deauthorization. According to Corps data, funds have not been obligated for this project since 1998, but it has not been deauthorized.

The Corps’ decision to remove projects from its draft list when such projects have not had funds obligated for 5 fiscal years, and thereby not notify Congress of all projects eligible for deauthorization, is not consistent with statutory requirements. As a result, Congress has not received a complete list of projects eligible for deauthorization, and some projects may still be listed as authorized without being subject to deauthorization as specified in statute. Officials we interviewed from 10 of 16 district offices said that the 5-year time frame for deauthorizing projects without obligations, as specified in statute, is too short a period for a project to become eligible for deauthorization. For example, officials in 4 of the 16 district offices we interviewed cited the current economic climate, including reductions in the Corps’ budget and fewer funds available for construction projects, as reasons why a project should not be deauthorized, as it might still have value to the communities after the 5-year period. Additionally, officials from 2 Corps district offices said some projects may not receive priority in the agency’s budget request. For example, an official from the Alaska district said that projects within his district tend to rank lower than projects in high-traffic ports, such as New York and Long Beach, but authorized construction projects are still important to the Alaskan community and should not be deauthorized.

Reports show that having a large backlog can have negative effects. For example, a 2007 report by the National Academy of Public Administration states that a backlog complicates the budgeting process and provides an incentive to spread funding widely, over many projects, rather than to complete high-priority projects that have already begun construction (National Academy of Public Administration, Prioritizing America’s Water Resources Investments: Budget Reform for Civil Works Construction Projects at the U.S. Army Corps of Engineers (Washington, D.C.: February 2007)). That report recommended that the Corps and Congress work to eliminate the backlog of projects that have little chance of being funded. Similarly, the National Academy of Sciences reported in 2011 that the backlog leads to projects being delayed, conducted in a stop-start manner, and contributes to overall inefficient project delivery.

Current federal statute requires the Secretary of the Army to transmit to Congress a list of authorized projects or separable elements of projects that have had no obligations during the previous 5 full fiscal years. However, Corps headquarters officials were unable to provide us with copies of most of the deauthorization lists the agency has been required to send to Congress since WRDA 1996. Specifically, the Corps located 4 lists (2006, 2010, 2011, and 2012) out of the 12 lists that were required to be transmitted to Congress for fiscal years 1997 through 2013. Federal standards for internal control (GAO/AIMD-00-21.3.1) call for agencies to document internal control in management directives, administrative policies, or operating manuals and be readily available for examination.
Without having documented policies or procedures that outline the deauthorization process, Corps headquarters officials and officials from the Assistant Secretary of the Army for Civil Works may not be clear about the specific responsibilities of each office, and Congress may not be notified annually about projects eligible for deauthorization.

Under what is commonly referred to as the Federal Records Act, each federal agency is required to make and preserve records. However, the Corps does not have a recordkeeping policy in place with respect to project deauthorizations, which has resulted in incomplete records of documents related to the deauthorization process, including documents sent to Congress. Without records and recordkeeping policies related to project deauthorizations, the Corps will have difficulty ensuring that its transactions related to deauthorization comply with the statutory records management requirements. In addition, historical records related to project deauthorizations could be lost due to the absence of a recordkeeping policy and not be available for public access in the event of a Freedom of Information Act request.

In addition to requiring the Corps to send lists of projects eligible for deauthorization to Congress, federal statute requires the publication of projects that are deauthorized in the Federal Register. According to the deauthorization memorandum, Corps headquarters officials are responsible for publishing in the Federal Register the list of projects that are deauthorized, as well as a list of projects removed from the list of projects eligible for deauthorization due to resumption of funding or reauthorization. The Corps published only 3 lists (1999, 2003, and 2009) of deauthorized projects in the Federal Register during fiscal years 1997 to 2013, the period during which the agency was subject to the statutory project deauthorization requirements. Corps headquarters officials told us that the statute does not specify dates for publishing projects that are deauthorized in the Federal Register. In addition, Corps headquarters officials told us that the Corps has no formal written policy or guidelines, consistent with federal standards for internal control, to ensure that lists of projects that are deauthorized are published in the Federal Register. Without having documented policies or procedures that outline the deauthorization process, the Corps cannot ensure that projects deauthorized by operation of the statute are published in the Federal Register as required.

The Corps has not complied with statutory requirements to submit to Congress an annual list of incomplete water resources studies that have been authorized but for which no funds have been appropriated during the prior 5 full fiscal years. As discussed earlier, Corps headquarters officials told us the agency does not track studies and therefore cannot identify studies that meet deauthorization eligibility requirements. Moreover, the Corps does not require studies to be entered into its databases until funds have been appropriated. Corps headquarters officials also said the agency does not have policies and procedures outlining a process to identify and submit to Congress a list of studies eligible for deauthorization and has not submitted lists of studies eligible for deauthorization to Congress, as required by statute, due to manpower and resource constraints.
Without having a mechanism to compile data on studies or a documented policy and procedures in place to deauthorize studies, as noted in federal internal control standards, the Corps cannot comply with deauthorization requirements for studies specified in statute, and the agency, Congress, and nonfederal sponsors have incomplete information on what is feasible to address the water resources needs of the country.

The Corps’ incomplete construction backlog and declining appropriations for construction projects have left communities uncertain when or if their projects will be completed. Although the Corps has taken the initial steps of compiling a database to include all authorized projects, the agency faces challenges in identifying backlogged projects and projects eligible for deauthorization. Specifically, the agency does not have complete data on its backlogged projects, because it does not have documented policies or procedures to enter projects into its databases when authorized, as called for by federal standards for internal control. Without such guidance, it is likely that the Corps will continue to have incomplete data on such projects and will not know the full extent of the construction project backlog, making it difficult to effectively deauthorize all eligible projects and for the Corps and Congress to effectively prioritize projects and plan the agency’s work. In addition, the Corps was unable to locate all of the lists of projects eligible for deauthorization that it has been required to transmit to Congress since 1997, and the Corps has published lists of deauthorized projects in the Federal Register inconsistently during that time period. Without a recordkeeping policy in place as required by statute and without a documented policy and procedures outlining the deauthorization process consistent with federal standards for internal control, the Corps cannot ensure that projects eligible for deauthorization are submitted to Congress and that projects deauthorized by operation of the statute are published as required in the Federal Register. Furthermore, although federal statute places study-related deauthorization requirements on the Corps, the Corps has not complied with these provisions. Moreover, the Corps does not have a mechanism to compile data on studies or a documented policy and procedures for identifying eligible studies for deauthorization, as called for by federal standards for internal control. As such, the Corps, Congress, and nonfederal sponsors will not have complete information for making fully informed decisions on what is feasible to address the water resources needs of the country.

To ensure that the Corps meets the statutory requirements related to deauthorization of projects, we recommend that the Secretary of Defense direct the Secretary of the Army to direct the Chief of Engineers and Commanding General of the U.S. Army Corps of Engineers to take the following four actions:

Establish and implement a written policy to ensure all authorized projects are entered into the agency’s database and tracked.

Once the new database includes all authorized projects, determine what projects are eligible for deauthorization, transmit the list to Congress, and publish projects that are deauthorized in the Federal Register.
Establish and implement written policies and procedures documenting the project deauthorization process, from initial compilation of a list of eligible projects to submitting the list to Congress and publishing the projects that are deauthorized in the Federal Register.

Establish and implement a policy for recordkeeping to ensure that documents related to deauthorization are maintained as federal records.

To ensure that the Corps meets the statutory requirements related to deauthorization of incomplete water resources studies, we recommend that the Secretary of Defense direct the Secretary of the Army to direct the Chief of Engineers and Commanding General of the U.S. Army Corps of Engineers to take the following three actions:

Establish a mechanism for tracking all authorized studies and establish and implement a written policy to ensure all authorized studies are tracked.

Establish and implement policies and procedures documenting the deauthorization process for studies, from initial compilation of a list of eligible studies to submitting the list to Congress.

Determine what studies are eligible for deauthorization and transmit the list to Congress.

We provided a draft of this report for review and comment to the Department of Defense. In its written comments, reprinted in appendix II, the department concurred with our recommendations and noted that it will take steps to address those recommendations.

As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the appropriate congressional committees, the Secretary of Defense, the Secretary of the Army, the Chief of Engineers and Commanding General of the U.S. Army Corps of Engineers, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3841 or fennella@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III.

This report examines (1) the extent to which the Corps tracks data on its backlog of construction projects and studies, and (2) the extent to which the Corps identifies construction projects and studies eligible for deauthorization, and meets statutory deauthorization requirements. For purposes of this report, the Corps’ backlog includes any study or project that was authorized but for which the study or the construction is not yet complete. Our work focused on the deauthorization processes for construction studies and projects in fiscal years 1997 to 2013. We chose this time frame based on amendments to the deauthorization requirements enacted in WRDA 1996 and because the Corps did not have complete obligations data for fiscal year 2014 at the time of our review. To determine the extent to which the Corps tracks data on its backlog of construction studies and projects, as well as the extent to which the Corps identifies eligible construction studies and projects for deauthorization, we reviewed relevant federal statutes and the Corps’ policies and procedures related to data collection and deauthorization processes.
We also obtained the Corps’ obligations data for fiscal years 1997 to 2013 in an attempt to recreate the Corps’ methods to identify projects for deauthorization. However, after multiple interviews with Corps headquarters officials responsible for the agency’s databases to discuss discrepancies, we determined the data were not reliable for that purpose because not all authorized projects were contained in the databases. We found that the obligations data that the Corps had were sufficiently reliable for us to compare those projects with the projects the Corps includes in its backlog and to compare with the Corps’ draft deauthorization lists. We also reviewed data dictionaries, user guides, and other documentation that the Corps provided for the agency’s databases. We reviewed these documents to help determine how the Corps used its databases to guide its deauthorization processes and to assess data reliability. We also reviewed deauthorization documents produced by the Corps from 1997 to 2013. These documents included draft deauthorization lists created by Corps headquarters, draft deauthorization lists that were verified by the division and district offices, lists of projects eligible for deauthorization that were sent to Congress, and Federal Register notices pertaining to deauthorized projects. Corps headquarters officials located one year of draft deauthorization lists that were verified by the division and district offices. We also reviewed any draft deauthorization lists that were provided by district officials we spoke with. Corps headquarters officials provided us with four lists (2006, 2010, 2011, and 2012) of projects eligible for deauthorization that the agency sent to Congress from 1997 to 2013.

We interviewed Corps headquarters officials to obtain additional information on the agency’s policies and procedures for tracking its construction backlog and to determine the process the agency uses to create a list of studies and projects eligible for deauthorization. In addition, we spoke with nonfederal sponsors of Corps projects who are members of two national associations to determine how they were affected by the Corps’ backlog and deauthorization process. We selected these associations because their membership includes nonfederal sponsors of Corps water resources projects. The views of representatives from these associations are not generalizable, but they provided perspectives on the Corps’ backlog and deauthorization processes.

We also interviewed officials from a nonprobability sample of 16 of 38 Corps domestic civil works district offices to determine how district offices track data on studies and projects and implement the deauthorization process. We selected a nonprobability sample of district offices that met our selection criteria of (1) geographical representation of two district offices in each of the Corps’ 8 civil works division offices and (2) number of projects per district office. Specifically, we selected the district offices with the most projects and the district offices with the least projects in each of the 8 division offices, based on a list, provided by Corps headquarters officials, of construction projects by division and district. Project data were obtained from headquarters officials and included active projects in each of the Corps districts. We used these data for the purpose of selecting our nonprobability sample and determined they were sufficiently reliable for this purpose.
Because this is a nonprobability sample, the experiences and views of the Corps district officials are not representative of, and cannot be generalized to, all Corps districts. However, these experiences and views provide illustrative examples of how district offices track projects and implement the deauthorization process. We conducted this performance audit from July 2013 to August 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the individual named above, key contributors to this report included Vondalee R. Hunt (Assistant Director), Cheryl Arvidson, Danny Baez, Elizabeth Beardsley, Cindy Gilbert, Geoffrey Hamilton, Kristin Hughes, Lisa S. Moore, Jerome Sandau, and Holly Sasso.
The Corps reports having a backlog of more than 1,000 authorized water resources construction projects in its Civil Works Program that it estimates will cost more than $62 billion to complete, as of June 2014. Federal statute requires the Corps to identify for deauthorization projects that have had no obligations for 5 years and studies that have had no appropriations for 5 years. Once a project or study is deauthorized, it must be reauthorized to begin or resume construction or study. GAO was asked to review the Corps' construction backlog and deauthorization processes. This report examines (1) the extent to which the Corps tracks its backlog of construction projects and studies, and (2) the extent to which the Corps identifies construction projects and studies eligible for deauthorization, and meets statutory deauthorization requirements. GAO reviewed legislation, Corps policy, guidance, and documentation of its backlog and deauthorization process. GAO interviewed Corps headquarters officials and officials from 16 of the Corps' 38 domestic civil works districts, selected based on geographical representation and number of projects. The U.S. Army Corps of Engineers' (Corps) backlog list of authorized water resources construction projects is incomplete because the agency does not track all authorized projects and the list does not include studies. Specifically, GAO found that the backlog does not include some projects that were authorized but for which funds were never appropriated. Corps headquarters officials said that the agency does not have a policy instructing its district offices to enter into their databases projects that are authorized but for which funds have not been appropriated and that it is up to the discretion of the district offices to do so. Corps officials also stated that the agency does not include studies on its backlog, nor does it have a policy instructing district offices to track studies. Federal internal control standards state that agencies are to document internal controls in management directives, administrative policies, or operating manuals to help ensure consistent treatment. Officials at 15 of 16 district offices told GAO that they enter projects into the databases only after funds are appropriated. The Corps has begun to take steps to include all authorized projects in a new agency database; however, this database will not include studies. Federal internal control standards call for agencies to have mechanisms to appropriately document transactions and other significant events. Without written policies requiring districts to track all projects and studies and a mechanism to track studies, the Corps may continue to have an incomplete backlog list. The absence of a complete backlog list of projects and studies will likely make it difficult for the Corps to know the full universe of unmet water resource needs of the country and for Congress to make informed decisions when authorizing projects and studies and appropriating funds. The Corps has not identified all eligible construction projects and studies for deauthorization and has not complied with statutory requirements to notify Congress of all projects and studies eligible for deauthorization. The agency is unlikely to identify those projects that have been excluded from the databases and had no funds obligated for 5 fiscal years, because, as discussed above, the Corps does not require districts to enter all authorized projects into its databases.
Officials GAO interviewed from 5 of 16 districts said they likely would not identify and add projects to the draft list of projects eligible for deauthorization because they were not required to do so. Moreover, the Corps has not complied with statutory requirements to notify Congress of all projects that have not had obligations in 5 fiscal years. Specifically, the Corps cannot demonstrate that it transmitted a list of projects eligible for deauthorization in 8 of the 12 years since 1997 in which it was required to do so. Corps headquarters officials said that the process and communication mechanisms for deauthorizing projects are not in Corps policies or procedures. Without documented policies and procedures consistent with federal standards for internal control, the Corps may continue to publish deauthorization lists inconsistently. In addition, the Corps has not complied with requirements to identify studies for deauthorization because, according to officials, the agency does not have policies and procedures in place to do so. Without having the data, as discussed above, or policies and procedures in place to identify studies for deauthorization, the Corps and Congress will not have complete information to make decisions when prioritizing the water resources needs of the country. GAO recommends, among other things, that the Corps establish and implement policies to ensure projects and studies are tracked; establish a mechanism to track studies; and develop and implement policies to identify projects and studies that meet deauthorization criteria, and notify Congress. The Department of Defense concurred with the recommendations.
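To make the statutory screening rule concrete, the sketch below flags a project as eligible for deauthorization once it has had no obligations for 5 consecutive fiscal years. The data layout and function names are hypothetical, not the Corps' actual database schema.

```python
# Hypothetical sketch of the statutory screening rule described above: a
# project becomes eligible for deauthorization when it has had no
# obligations for 5 consecutive fiscal years. The data layout is
# illustrative, not the Corps' actual database schema.

def eligible_for_deauthorization(obligations_by_fy, current_fy, window=5):
    """Return True if a project had no obligations in the last `window`
    complete fiscal years."""
    recent_years = range(current_fy - window + 1, current_fy + 1)
    return all(obligations_by_fy.get(fy, 0) == 0 for fy in recent_years)

# Example: a project last funded in fiscal year 2007.
project_obligations = {2005: 1_200_000, 2006: 450_000, 2007: 80_000}
print(eligible_for_deauthorization(project_obligations, current_fy=2013))
# True: no obligations in fiscal years 2009 through 2013
```

Note that a screen of this kind can flag only projects that appear in the databases in the first place, which is why authorized projects that districts never entered escape identification.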
BAH, one of several components of military compensation, is intended to provide servicemembers with an allowance to enable them to obtain suitable housing when military-owned housing is not provided. Accordingly, BAH payments reflect the cost of housing where servicemembers are stationed, and the payments change annually in response to increases or decreases in local housing costs. However, the most recent base realignment and closure (BRAC) process, among other restationing actions, will move large numbers of military personnel to communities that initially may lack enough private housing affordable to most servicemembers. Several HUD, USDA, and IRS rental housing programs that are intended to make housing affordable to low-income households count BAH as income when assessing the eligibility of active-duty servicemembers. BAH is one of several elements of regular military compensation. Regardless of whether they live in military-owned housing or receive BAH, servicemembers receive basic pay and a Basic Allowance for Subsistence (BAS). BAH and BAS are not subject to federal income tax. With the addition of average BAH payments to the other two pay elements, regular military compensation in 2006 ranges from $26,401 for the lowest-ranking enlisted servicemembers to $183,196 for the highest-ranking officers, excluding any tax advantage from the allowances’ exemption from federal income tax (see fig. 1). In addition to the primary elements of military compensation shown in figure 1, servicemembers with duty stations in more than 55 continental U.S. locations, where nonhousing expenses exceed the national average by at least 8 percent, receive a cost-of-living allowance. Servicemembers also may receive other types of pay, allowances, or bonuses, depending on their professional backgrounds, skills, or duties. For example, servicemembers may receive special pay for hardship duty or exposure to hostile fire, allowances when they are separated from their families because of a change in station or a temporary duty assignment, and bonuses for enlistment and reenlistment. According to DOD officials, in March 2006, about 950,000 personnel lived in private housing (including privatized military family housing) and received BAH—including roughly 70 percent of active-duty servicemembers in the United States, as well as some activated reservists and servicemembers stationed overseas whose dependents lived in the United States. DOD generally requires enlisted servicemembers in the lowest ranks who do not have dependents to live on base in furnished living quarters, commonly referred to as barracks. These enlisted servicemembers do not receive BAH. Each year, DOD sets BAH rates (i.e., the allowances servicemembers receive monthly) that are based on the median local monthly cost of housing, including current market rents, utilities, and renter’s insurance. The amounts that servicemembers receive also are based on their pay grades and whether they have dependents. To calculate BAH rates for different pay grades, DOD uses six standard categories of housing—ranging from a one-bedroom apartment to a four-bedroom, single-family detached house—that are intended to match the housing normally occupied by civilians with comparable incomes. DOD applies separate categories to servicemembers with and without dependents, but the number of dependents does not affect the BAH amount.
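As a rough illustration of the rate structure just described, the sketch below maps each combination of pay grade and dependency status to one of six housing categories and returns the median local monthly cost for that category. The category assignments and dollar figures are invented for illustration and are not DOD's actual anchor points.

```python
# Illustrative sketch of the BAH rate structure described above: each
# combination of pay grade and dependency status maps to one of six
# housing categories, and the monthly rate is keyed to the median local
# cost (rent, utilities, and renter's insurance) for that category.
# Category assignments and dollar figures are hypothetical.

HOUSING_CATEGORY = {
    # (pay grade, has dependents) -> housing category
    ("E-4", False): "one-bedroom apartment",
    ("E-4", True): "two-bedroom apartment",
    ("E-6", True): "three-bedroom townhouse",
    ("O-5", True): "four-bedroom single-family detached house",
}

def bah_rate(pay_grade, has_dependents, local_median_costs):
    """Look up the member's housing category and return the median local
    monthly cost for it. Only whether the member has dependents matters;
    the number of dependents does not change the amount."""
    category = HOUSING_CATEGORY[(pay_grade, has_dependents)]
    return local_median_costs[category]

duty_station_costs = {
    "one-bedroom apartment": 700,
    "two-bedroom apartment": 850,
    "three-bedroom townhouse": 1_100,
    "four-bedroom single-family detached house": 1_500,
}
print(bah_rate("E-4", True, duty_station_costs))  # 850
```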
BAH rates have increased since 2000 as DOD implemented an initiative to reduce servicemembers’ out-of-pocket housing costs. Prior to 2005, the BAH rate for each area and pay grade was the local median monthly housing cost minus a percentage of the national median monthly housing cost. That deduction represented the amount that servicemembers would have to pay out of pocket if their actual housing costs exactly matched the median local housing cost for their pay grade. In 2000, the deduction was 19 percent of the national median housing cost. DOD gradually reduced the deduction so that, by 2005, BAH rates equaled the median housing cost for each area and pay grade. Furthermore, while the housing allowance is calculated on the basis of the rental market, servicemembers may choose to apply their allowance toward purchasing a home, and they are free to spend more or less than their allowance on housing. We reported in April 2006 that the increases in BAH rates had made it possible for more servicemembers to afford private housing in the local market, thus reducing the need for privatized housing at installations. This has recently contributed to lower-than-expected occupancy rates at some privatized housing projects. If some privatized projects persistently experience lower-than-expected occupancy rates, they could encounter financial difficulties or, at worst, fail. To avoid such concerns in future privatization projects, we recommended that DOD determine how increased BAH rates would affect installations’ housing requirements and provide guidance on how the services should incorporate this information into their assessments of the need for privatized family housing. The National Defense Authorization Act for Fiscal Year 2002 authorized a new BRAC process in 2005. This was the fifth such process in the last two decades, but the first since 1995. As in previous processes, Congress enacted the legislation to close unneeded bases and realign others. On November 9, 2005, Congress accepted in their entirety the most recent BRAC recommendations for base closings and realignments. DOD has 6 years, or from 2005 until September 15, 2011, to implement these recommendations. The 2005 BRAC process affects a substantial number of communities surrounding installations that are expected to experience considerable growth in military personnel. While scores of installations will gain or lose military personnel, more than 20 installations are each expected to gain between 2,000 and 21,000 military, civilian, and mission-support contractor personnel. For the most part, installations with the largest gains are located in predominantly urban counties. However, some installations are in rural areas that may have less housing available, raising the possibility that incoming personnel initially could face a shortage of nearby housing that is affordable to them. The installations that will gain the most personnel through BRAC are Department of the Army installations, with their gains attributable to actions such as the consolidation of various activities and the return of personnel from overseas locations under DOD’s integrated global presence and basing strategy. In addition to shifts related to BRAC, the Army is realigning personnel as it changes its force structure. Various HUD, USDA, and IRS rental housing programs are intended to make housing affordable for lower-income renters. None of the federal agencies that administer these programs maintains data on the number of participating servicemembers.
The programs either support the production of new or rehabilitated rental housing for eligible families or subsidize tenants’ rents to make existing units affordable (see table 1). Specifically: Among the production programs, LIHTC and Section 515 Rural Rental Housing Loans require property owners to restrict the rents that eligible tenants pay. The rent on each tax-credit unit generally cannot exceed 30 percent of the applicable income limit, adjusted for the number of bedrooms. Tenants pay 30 percent of their adjusted incomes toward the rent on Section 515 units. The tax-exempt multifamily housing bonds program requires units to be set aside for eligible families, but the rents on these units generally do not have to be restricted. Rental assistance programs make payments to property owners to make up the difference between an eligible tenant’s rent contribution (generally, 30 percent of adjusted monthly income) and a unit’s total rent. The Housing Choice Voucher program offers tenant-based rental assistance that tenants can use to rent privately owned apartments or single-family homes, and that they can transfer to new residences if they move. In contrast, the project-based Section 8 and Section 521 Rural Rental Assistance programs offer project-based rental assistance, which is attached to specific properties and is available to tenants only when they are living in units at these properties. Public housing also subsidizes tenants’ rents. However, rather than making rental assistance payments to owners that are keyed to tenants’ rent payments, HUD provides public housing agencies with annual operating subsidies that are based partly on the property’s projected overall rental income. All of these federal programs use a common definition of income as set out in a HUD regulation. Under this definition, incomes of servicemember households include all regular pay, special pay, and allowances (including BAH) of the servicemember, except special pay to servicemembers who are exposed to hostile fire. Each program determines households’ eligibility to apply by comparing their incomes with an income limit, expressed as a percentage of the area median. The income limits are adjusted for family size, with higher limits for larger families. In addition, the HUD and USDA programs use tenant income (with certain adjustments) to determine how much of a unit’s rent the tenant will pay. The programs generally target various categories of households, defined according to the relationship between a household’s income and the local area median income (AMI): extremely low (household income is no more than 30 percent of AMI), very low (no more than 50 percent of AMI), low (no more than 80 percent of AMI), and moderate (no more than $5,500 above 80 percent of AMI). In addition to these categories, the LIHTC and tax-exempt multifamily housing bond programs can target households with incomes that are no more than 60 percent of AMI. For purposes of this report, we focused on the 50 percent and 60 percent of AMI limits because they generally apply to new applicants for the two largest federal rental housing programs, Housing Choice Voucher and LIHTC. The federal rental housing programs are not entitlements and, as a result, do not assist all households that HUD has identified as having housing needs—that is, households with very low incomes that pay more than 30 percent of their income for housing, live in substandard housing, or both. 
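The income-targeting rules these programs share reduce to two comparisons: classifying a household against percentage-of-AMI limits and setting the tenant's contribution at roughly 30 percent of adjusted income. The sketch below illustrates that arithmetic; the AMI figure and household numbers are hypothetical, and actual limits are published by area and adjusted for family size.

```python
# Sketch of the income-targeting arithmetic common to the programs
# described above. The AMI figure and household numbers are hypothetical;
# actual limits are published by area and adjusted for family size.

def income_category(household_income, ami):
    """Classify a household against the standard percentage-of-AMI bands."""
    if household_income <= 0.30 * ami:
        return "extremely low"
    if household_income <= 0.50 * ami:
        return "very low"
    if household_income <= 0.80 * ami:
        return "low"
    if household_income <= 0.80 * ami + 5_500:
        return "moderate"
    return "above moderate"

def rental_assistance_payment(adjusted_monthly_income, unit_rent):
    """Rental assistance generally covers the gap between a unit's rent and
    the tenant's contribution of 30 percent of adjusted monthly income."""
    tenant_share = 0.30 * adjusted_monthly_income
    return max(unit_rent - tenant_share, 0)

print(income_category(24_000, ami=50_000))              # very low
print(rental_assistance_payment(2_000, unit_rent=900))  # 300.0
```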
According to HUD data for 2003, federal rental housing programs assisted an estimated 4.3 million households, or 27 percent of all renter households with very low incomes. Over 9 million renter households with very low incomes (about 59 percent) did not receive federal assistance and had housing needs. Of these 9 million households, over 5 million had what HUD terms “worst case” needs—that is, they paid over half of their income in rent, lived in severely substandard housing, or both. Assuming that the primary components of military pay were the only sources of servicemembers’ household incomes, excluding BAH payments from income when determining servicemembers’ eligibility for federal rental housing programs would have substantially increased the percentage that would have been eligible to apply for the programs as of December 2005. Specifically, most junior enlisted members would have been eligible for the programs, as would much smaller percentages of senior servicemembers. In addition, although few in number, servicemembers with the largest families (nine or more persons) would have been somewhat more likely to be eligible for the programs than those with smaller families. However, to the extent that servicemembers’ households had income from nonmilitary sources, fewer of them would have been eligible for the federal programs. We lacked data on servicemember household incomes from nonmilitary sources, but at least 80 percent of the potentially eligible servicemembers were married, and income earned by spouses would likely have disqualified many of these households. Assuming that the primary components of servicemembers’ military pay were their only sources of household income in 2005, we found that by excluding BAH from income determinations, 19.9 percent of servicemembers of all grades would have been eligible for federal rental housing programs that used an income limit of 50 percent of AMI, compared with less than 1 percent of servicemembers with BAH included. Similarly, at the 60 percent of AMI limit, 39.3 percent of the servicemembers would have been eligible if BAH were excluded when determining income, compared with 4.8 percent if BAH were included (see fig. 2). At both income limits, most junior enlisted members (for our purposes, E-1 through E-4) would have been eligible for the programs if BAH were excluded. Specifically, at the 50 percent of AMI limit, substantial majorities of E-1s (92.4 percent), E-2s (78.7 percent), and E-3s (65.2 percent) would have been eligible. At the 60 percent of AMI limit, virtually all E-1s (99 percent) and E-2s (97.6 percent) and substantial majorities of E-3s (90.2 percent) and E-4s (64.6 percent) would have been eligible. In addition, using the same assumption that household income included only the primary components of military pay, some senior enlisted members and officers would have been eligible for the programs if income determinations excluded BAH. Specifically, at the 50 percent of AMI limit, 19.2 percent of E-5s and 9.4 percent of E-6s would have been eligible, as would very small percentages of servicemembers in pay grades E-7 through E-9 (see fig. 2). The percentage of eligible officers also would have been very small, as follows: 1 percent using the 50 percent of AMI limit, and 2 percent using the 60 percent of AMI limit.
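In outline, the comparison behind these figures builds each servicemember's annual income from the primary pay elements and tests it against the applicable limit twice, once with BAH and once without. The pay amounts and limit in the sketch below are invented for illustration.

```python
# Outline of the eligibility comparison behind the figures above: annual
# income is built from the primary monthly pay elements, then tested
# against the applicable income limit with and without BAH. All dollar
# amounts are invented for illustration.

def annual_income(basic_pay, bas, bah, cola=0, include_bah=True):
    """Annualize the primary monthly pay elements."""
    monthly = basic_pay + bas + cola + (bah if include_bah else 0)
    return 12 * monthly

def eligible(member, income_limit, include_bah):
    income = annual_income(member["basic_pay"], member["bas"],
                           member["bah"], member.get("cola", 0),
                           include_bah)
    return income <= income_limit

junior_enlisted = {"basic_pay": 1_456, "bas": 279, "bah": 900}
limit_50_pct_ami = 26_500  # hypothetical limit for the member's area

print(eligible(junior_enlisted, limit_50_pct_ami, include_bah=True))   # False
print(eligible(junior_enlisted, limit_50_pct_ami, include_bah=False))  # True
```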
Again assuming that the primary components of military pay were the only sources of household income, excluding BAH from income determinations would have made considerable percentages of servicemembers with families of all sizes eligible for the programs, using either the 50 percent or 60 percent of AMI limit. However, because the programs’ income limits increase with family size, servicemembers with larger families (although relatively few in number) generally would have been more likely to be eligible than those with smaller families (which were much greater in number). For example, with BAH included in income determinations, 6.6 percent (59) of the largest families (those with nine or more persons) would have been eligible for programs using the 50 percent of AMI limit, compared with 0.5 percent (866) of the smallest (two-person) families (see fig. 3). With BAH excluded, 40.6 percent (361) of the largest families would have been eligible, compared with 23.7 percent (45,262) of the smallest families. The same general pattern held true for programs using the 60 percent of AMI limit. For example, 63.8 percent (568) of the largest families and 44.5 percent (84,999) of the smallest families would have been eligible if BAH were excluded from income determinations. To the extent that servicemembers had additional sources of household income, their actual eligibility for the federal rental housing programs would have been less than the percentages shown in our analysis. Additional sources of household income could include income from assets (such as savings accounts or mutual funds), employment of other household members, or types of military pay that we did not include in our analysis. For example, figure 4 shows that—at both program income limits and with BAH included in or excluded from income determinations—at least 80 percent of the potentially eligible servicemembers were married and, thus, could have had additional income earned by a spouse. In addition, at least 9 percent of the potentially eligible servicemembers received other types of military pay. To illustrate how additional sources of household income could affect eligibility for the federal rental housing programs, we calculated the amounts of additional income it would take to disqualify married servicemember households that would have been eligible on the basis of their military incomes alone. We found that, among the married servicemembers who were potentially eligible with BAH included in income determinations, income from even part-time, minimum-wage work by their spouses likely would have disqualified many from the federal programs. The same was true even if BAH were excluded from income determinations. For example, with BAH included, spousal income of $2,004 would have been enough to disqualify half of the married servicemembers who were potentially eligible for programs using the 50 percent of AMI limit (see table 2). With BAH excluded, spousal income of $4,044 would have been enough to disqualify half of the married servicemembers who were potentially eligible. At the 60 percent of AMI limit, $3,108 in spousal income would have disqualified half of the potentially eligible married servicemembers with BAH included in income determinations, compared with $6,180 if BAH were excluded. As shown in table 2, these amounts represent part-time work of 24 hours per week or less at the federal minimum wage.
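The thresholds in table 2 follow from the same comparison: the additional income a household can absorb is the gap between the applicable limit and its military income, which can then be expressed as weekly hours at the federal minimum wage then in effect ($5.15 per hour). A sketch of that conversion:

```python
# Sketch of the table 2 arithmetic: the spousal income that disqualifies a
# potentially eligible household is the gap between the applicable income
# limit and the household's military income, expressed here as weekly
# hours at the 2005-2006 federal minimum wage of $5.15 per hour.

FEDERAL_MINIMUM_WAGE = 5.15  # dollars per hour, 2005-2006

def disqualifying_spousal_income(income_limit, military_income):
    """Additional annual income that pushes the household over the limit."""
    return max(income_limit - military_income, 0)

def hours_per_week_at_minimum_wage(annual_income):
    return annual_income / (FEDERAL_MINIMUM_WAGE * 52)

# Using the report's median figure for the 50-percent-of-AMI programs with
# BAH included: $2,004 in spousal income disqualifies half of the
# potentially eligible married servicemembers.
print(round(hours_per_week_at_minimum_wage(2_004), 1))  # 7.5 hours per week
```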
Agency officials and representatives from the four communities we examined described factors that may limit the role of federal rental housing programs in increasing the supply of housing or helping servicemembers afford existing housing, regardless of how BAH affects their eligibility. DOD officials said that servicemembers would be unlikely to need federal rental housing programs because BAH rates cover median local housing costs and would adjust annually to reflect any increases in market rents that resulted from increased demand for housing near growing installations. Yet some community officials said that the LIHTC program could be used to build more affordable housing if more servicemembers were eligible. However, states would have to award tax credits to projects in these communities, and housing market factors—such as the financial feasibility of building market-rate units with rents that low-ranking servicemembers could afford—could affect developers’ interest in using the LIHTC program. Furthermore, although HUD and USDA programs could help some eligible servicemembers rent existing units, the programs are not entitlements; the limited availability of this rental assistance may preclude servicemembers from using the programs. Also, if more servicemembers applied for these programs, eligible lower-income civilians might face longer times on waiting lists. According to DOD officials, servicemembers would be unlikely to need federal rental housing programs to obtain affordable housing near growing installations because BAH rates cover local housing costs and would adjust for any increases in market rents that resulted from personnel gains. Since 2005, BAH rates have fully covered the median local cost of housing at each installation. Officials noted that DOD’s recent initiative to reduce servicemembers’ out-of-pocket housing costs had resulted in substantial increases in BAH rates nationwide, including at the four selected installations we reviewed (Forts Benning, Bliss, Drum, and Riley). In addition, the officials said that, if increased demand for housing near a growing installation caused upward pressure on housing costs, DOD would adjust BAH rates upward as part of the annual rate-setting process, allowing servicemembers to obtain market-rate housing without additional federal assistance. However, if vacant units were not available in the communities immediately surrounding a growing installation, DOD officials acknowledged that some servicemembers might have to seek housing in outlying communities until the private market responded with new construction closer to the installation. Furthermore, the National Defense Authorization Act for Fiscal Year 2006 authorized the Secretary of Defense to prescribe temporary increases in BAH rates in disaster areas or areas that contain one or more installations that are experiencing a sudden increase in the number of servicemembers assigned to the installation. Specifically, a temporary increase in BAH rates would be based on the amount by which area housing costs increased because of the disaster or influx of servicemembers and would apply until new rates for the next calendar year took effect. According to DOD officials, no installations had requested an increase in BAH rates because of installation growth, and the Secretary had not used this authority as of June 2006.
If an installation requests a temporary increase in BAH rates because of installation growth, the officials said that DOD would review local market conditions to determine whether an increase was warranted. To varying degrees, officials in the four communities (near Forts Benning, Bliss, Drum, and Riley) that we examined described a need to build more private housing for incoming servicemembers. Some officials indicated that, under certain conditions, the LIHTC program could help address their anticipated housing needs. According to officials at the selected installations, expected gains in military personnel ranged from about 4,500 at Fort Benning to about 19,500 at Fort Bliss (see table 3). The rural installations—Fort Drum and Fort Riley—expected more substantial growth relative to their existing supply of housing than did the urban installations. The communities generally did not yet have precise data on the expected number of servicemembers that would be most likely to seek private housing (servicemembers with families and those in higher pay grades who do not have dependents) or required to live in barracks (servicemembers in junior pay grades who do not have dependents). However, community officials in the Fort Riley area estimated that at least 9,000 more housing units would be needed, considering both the estimated number of incoming military personnel and the expected growth in the civilian employment at the installation. Similarly, community officials near Fort Drum estimated the need for approximately 2,000 additional units. Community officials in the Fort Benning and Fort Bliss areas had not developed such estimates, but they also anticipated that some new construction would be necessary to accommodate installation growth as well as other population increases. Officials in some of these communities indicated that, under certain conditions, the LIHTC program could help address their anticipated housing needs. In particular, officials in the rural communities surrounding Fort Drum and Fort Riley said that the LIHTC program could help them build more affordable housing in response to installation growth, but only if more servicemembers would qualify to live in tax-credit units (see sidebar). Assuming that the primary components of military pay were the only sources of household income, modest percentages of servicemembers at Fort Drum and Fort Riley in December 2005 might have qualified for tax-credit units using the 60 percent of AMI limit even under the program’s existing income definition, but much larger percentages (about 37 percent and 26 percent, respectively) would have been eligible if BAH were excluded from income determinations (see fig. 5). In contrast, almost none of the servicemembers at Fort Benning and Fort Bliss would have been eligible under the existing income definition, and modest percentages (about 14 percent and 10 percent, respectively) would have been eligible if BAH were excluded from income determinations. The variation in servicemembers’ eligibility across installations reflected differences in the percentages of servicemembers in the lowest pay grades. Although these data, which pertain to personnel already located at these installations as of December 2005, do not indicate how many incoming personnel might be eligible to live in tax-credit units, they suggest that substantial percentages of those at the rural installations might become eligible if BAH were excluded from income determinations. 
In light of that possibility, community officials near Fort Drum and Fort Riley stated that excluding BAH could create opportunities to use the LIHTC program. Specifically: Community officials near Fort Drum indicated that some developers were interested in building new rental housing but faced obstacles in financing projects because of an estimated gap between current market rents, which incoming junior enlisted personnel likely could afford, and the higher rents that developers would need to charge to make new apartments financially feasible without government subsidies. The officials had been working with developers to seek financing assistance through state programs, including New York’s low-income housing tax credit program, which serves households with incomes up to 90 percent of AMI. However, because the state programs are relatively small, the officials said that increasing servicemembers’ eligibility for the larger federal LIHTC program would provide more financing options for developers. Community officials near Fort Riley noted that servicemembers make up a substantial portion of the current and expected future rental market in the area, particularly in the community of Junction City just outside of the installation. They said that while some developers of tax-credit projects have expressed interest in building more units in the area, they would only do so if the pool of potential tenants included more incoming servicemembers, because the demand for additional tax-credit units among civilian families is limited. However, even if BAH were excluded from income when determining eligibility and if developers proposed building tax-credit units, LIHTC-funded development might be limited near growing installations because the state agencies that award available tax credits have a variety of priorities. By law, each state must prepare an annual plan that identifies its criteria for distributing its allocation of credits among proposed developments. A state would have to weigh how a proposed property would address the housing needs near growing installations against the state’s priorities and selection criteria. States must give preference to projects serving the lowest-income tenants and projects that would serve qualified tenants for the longest periods of time. The states’ selection criteria also must include other considerations, such as tenant populations with special housing needs. For example, the priority housing needs in Kansas’s plan for allocating tax credits in 2006 include projects in communities with populations of fewer than 5,000; preservation of housing with Section 8 or Section 521 project-based rental assistance; projects for special-needs populations, such as the homeless or people with disabilities; and projects whose units would offer below-market-rate rents. Projects addressing these priorities would receive extra points in the scoring process used to evaluate proposals. Furthermore, officials in the four communities described market factors that could influence whether developers would try to use the LIHTC program to build housing near growing installations. In general, developers would have limited incentive to compete for tax credits if conditions for building market-rate housing were favorable, such as in areas having a higher-income population. Generally, market-rate housing allows developers to charge whatever rents the market will bear, without other restrictions. 
In contrast, applicants for tax-credit financing must agree to limit the rents charged for tax-credit units for at least 30 years and must comply with other federal requirements for 15 years or risk losing the right for investors to claim the tax credits. Thus, developers might be less likely to propose new tax-credit units near a growing installation that expected to receive more senior servicemembers with relatively high incomes than near one that expected more junior members with relatively low incomes. For example: Aside from military students who would live on base, most of the incoming military personnel at Fort Benning are associated with the planned realignment of a training school with primarily senior-ranking personnel. Community officials said that because these personnel likely could afford to pay market rates for housing, they did not expect developers to focus on providing new housing through the LIHTC program. In contrast, on the basis of preliminary estimates from Fort Riley officials, roughly 45 percent of the servicemembers that would eventually be stationed there might be married members in pay grades E-1 through E-6. As of early 2006, the communities near Fort Riley had substantial market-rate development under way or in the planning stages. However, community officials anticipated that enough additional low-cost housing would be needed for servicemembers in these lowest pay grades to justify building tax-credit units for them (assuming they were to become eligible). In addition, developers might be more disposed to seek LIHTC financing in areas where the cost to build new housing was high relative to the incomes of junior enlisted members. For example: Whereas officials in the Fort Benning area expected that developers could build new market-rate housing within the price range that incoming servicemembers could afford, officials in the rural Fort Drum and Fort Riley areas stated that increasing construction, labor, and infrastructure costs could make new market-rate units too expensive for junior enlisted members or could make it difficult to secure financing for market-rate units. For example, the cost of bringing materials and, perhaps, workers into a rural area can contribute to relatively high development costs. Near Fort Bliss, El Paso city officials said that the LIHTC program might be an attractive financing alternative for developers if they could not otherwise build housing that servicemembers with the lowest incomes could afford. However, the officials did not yet know whether developers might need subsidies. They planned to study the issue by considering the expected incomes of servicemembers who would be arriving at Fort Bliss, the supply and price of existing housing, and the development costs and rents that would be charged for new market-rate housing. Even if more servicemembers were to become eligible for HUD and USDA rental housing assistance programs, waiting lists for units and the limited availability of large units might limit servicemembers’ participation in these programs, according to officials from HUD, USDA, and the four selected communities. 
Rather than financing new rental housing near growing installations, HUD’s Housing Choice Voucher, public housing, and project-based Section 8 programs and USDA’s Section 515 and Section 521 programs primarily would help servicemembers rent existing units if they obtained the programs’ assistance, typically by making up the difference between their required contribution (generally 30 percent of adjusted monthly income) and a unit’s total rent. However, these programs are not entitlements, and many of the HUD, USDA, and community officials said that the limited number of units or limited supply of rental assistance may deter eligible servicemembers from applying for these programs, especially in areas with long lists of applicants already awaiting assistance. If they did join the programs’ waiting lists, servicemembers might find other private, military-owned, or privatized housing; relocate to a different installation; or become ineligible for the program because of a promotion before they rose to the top of a list. In all four of the communities we reviewed, the Housing Choice Voucher and public housing programs had waiting lists, with times ranging from a few months to 2 years, according to officials from HUD field offices and the housing authorities that maintain the lists. For example, in Columbus, Georgia, near Fort Benning, the waiting list for vouchers was long enough that it was closed as of March 2006 and was not expected to open to new applicants until 2008. In addition, servicemembers with large families may face obstacles to using rental assistance programs because of the limited availability of units with three or more bedrooms, according to some HUD, USDA, housing authority, and installation officials. In the four communities, properties with project-based Section 8 assistance and public housing developments offered relatively few units with three or more bedrooms, thereby limiting the options for families of five or more persons. For instance, in the Fort Drum area, of 690 project-based Section 8 and public housing units intended for families, 172 had three bedrooms and 43 had four bedrooms; the remaining 475 had fewer than three bedrooms. Similarly, although voucher recipients can seek housing in the broader private rental market, some of the HUD field office officials noted that larger families could have a hard time finding a sufficiently sized apartment or house that would meet the program’s quality and cost standards. If servicemembers did join the programs’ waiting lists, HUD headquarters and field office officials noted that housing authorities could adopt preferences that would reduce servicemembers’ wait for vouchers or public housing, but some officials said that such a step could be controversial. The housing authorities that administer vouchers and public housing developments may establish local preferences for selecting families from waiting lists, on the basis of local housing needs and priorities. However, HUD and housing authority officials said that such preferences—for example, for victims of domestic violence or a single homeless person—have sometimes met opposition from those who would face longer waits because they did not qualify for these preferences. Similarly, some HUD officials said that increasing servicemembers’ eligibility for the programs or giving them preference on waiting lists could create tensions with lower-income civilians who might have to face even longer waiting periods for rental assistance as a result. 
On the basis of the 2006 pay rates for the primary elements of military compensation, servicemembers in all pay grades would have substantially more income than most existing Housing Choice Voucher recipients, even without their BAH payments (see fig. 6). In light of that difference in incomes, some of the officials also cited potential concerns about balancing any advantages for servicemembers with the programs’ current emphasis on targeting assistance to households with extremely low incomes. As some military installations gain servicemembers, nearby communities face opportunities for growth as well as potential challenges in providing an adequate supply of housing that incoming servicemembers can afford. Many of these incoming personnel may not have problems finding housing they can afford—for example, junior enlisted members without dependents generally live in barracks; DOD has the ability to raise BAH rates for other servicemembers to reflect any increases in housing costs near the growing installations; and many servicemembers may have additional resources, such as spousal income, that they can put toward housing costs. Where communities lack enough housing for incoming personnel or where rents are expensive for married junior personnel, federal rental housing programs might help provide affordable housing for servicemembers through the production of additional housing or through rental assistance for existing housing. If BAH were excluded from servicemembers’ incomes when determining eligibility, many of the lowest-ranking servicemembers could qualify to apply for these programs. However, the effects of such a change are uncertain and could involve trade-offs that warrant attention. For example, the LIHTC program (or, similarly, tax-exempt multifamily housing bonds) could help increase the supply of affordable rental housing for incoming servicemembers, if more of the members were eligible to live in tax-credit units. However, even if more servicemembers were eligible, the extent to which the LIHTC program would play a role in increasing the supply of affordable housing near growing installations would depend on local housing market conditions, the income distribution of incoming servicemembers, and the decisions of state agencies regarding whether to allocate tax credits to projects near growing installations or to projects that might address other state housing priorities. Furthermore, the rental assistance programs are not entitlements and already do not assist all eligible households. While the prospect of a lengthy wait might deter some servicemembers from applying for HUD and USDA rental assistance for existing units, those who did apply would expand the pool of those waiting for a limited supply of available assistance. Thus, making more servicemembers eligible by excluding BAH from income determinations could cause these programs to serve more servicemembers at the expense of eligible civilians. If the primary intent of excluding BAH from income determinations for federal rental housing programs is to help increase the supply of rental housing that servicemembers with the lowest incomes could afford, Congress should consider first applying such a change only to programs intended to stimulate production of such housing, such as LIHTC and tax-exempt multifamily housing bonds. We provided a draft of this report to DOD, HUD, IRS, Treasury, and USDA for their review and comment. Treasury and USDA did not comment on the draft report.
DOD, HUD, and IRS provided technical comments, which we incorporated where appropriate. DOD also provided comments in a letter from the Acting Deputy Under Secretary for Military Personnel Policy (see app. II). DOD commented that BAH does an excellent job of achieving the objective of providing servicemembers with the same quality and quantity of housing that their civilian counterparts can afford. However, DOD also noted that servicemembers may have difficulty finding adequate housing if there are substantial changes in the supply of or demand for housing in a local area, at least until the private market has had time to adjust to the changing conditions. DOD also observed that servicemembers with large families, who seek larger housing than an average-size family, may have difficulty finding adequate housing using their BAH payments alone and may apply for federal rental housing programs. However, DOD also stated that servicemembers should be eligible for federal housing subsidies under the same terms as their civilian counterparts. Furthermore, DOD commented that excluding BAH from income determinations might transfer existing scarce resources from low-income civilians to the military and generate ill will among civilians toward the military. Finally, DOD stated that, while our draft report showed that excluding BAH from income determinations might not have the desired effect of increasing the supply of rental housing for servicemembers, there might be other ways in which the government could assist the private market in responding to housing shortages. Our draft report discussed the particular difficulties of large families—even those receiving rental assistance—in finding suitable housing. The draft report also addressed the potential role of existing programs, particularly the LIHTC program, in stimulating production of affordable housing near growing installations. However, examining other possible federal strategies for increasing the supply of private housing was beyond the scope of this study. We are sending copies of this report to other interested congressional committees; the Secretaries of the Departments of Agriculture, Defense, Housing and Urban Development, and the Treasury; and the Commissioner of Internal Revenue. We will make copies available to others upon request. This report will also be available at no charge on GAO’s Web site at http://www.gao.gov. Please call me at (202) 512-8678 if you or your staff have any questions about this report. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. See appendix III for key contributors to this report. Our objectives were to determine (1) how excluding the Basic Allowance for Housing (BAH) from income determinations would have affected the eligibility of servicemembers receiving BAH as of December 2005 and (2) programmatic and market factors that could affect eligible servicemembers’ participation in the programs in selected communities gaining military personnel.
The federal rental housing programs in our scope include the Department of Housing and Urban Development’s (HUD) public housing, Housing Choice Voucher, and project-based Section 8 programs; the Department of Agriculture’s (USDA) Section 515 Rural Rental Housing Loans and Section 521 Rural Rental Assistance programs; and the Low-Income Housing Tax Credit (LIHTC) and tax-exempt multifamily housing bond programs, which are jointly administered by the Internal Revenue Service (IRS) of the Department of the Treasury (Treasury) and the states. To determine how excluding BAH would have affected the eligibility of active-duty servicemember households receiving BAH as of December 2005, we compared the incomes of servicemembers who received BAH in December 2005 with the federal rental housing programs’ income limits in effect at that time. We obtained personnel and pay data from the Department of Defense’s (DOD) Active Duty Personnel Master and Active Duty Pay files for December 2005. We analyzed data on 702,975 active-duty servicemembers who received BAH payments that month. We calculated servicemembers’ annual incomes on the basis of their December 2005 payments for basic pay; BAH; Basic Allowance for Subsistence; and, where applicable, a cost-of-living adjustment for servicemembers in certain high-cost areas. We assumed that these elements of military pay were servicemembers’ sole sources of household income because data on other income sources, such as spousal income, were unavailable. However, we analyzed how other sources of income might have reduced servicemembers’ eligibility by calculating the median additional annual income needed before married servicemembers’ families would have exceeded the income eligibility limit. To assess the reliability of the data used in our analysis, we interviewed DOD officials who were familiar with the data, reviewed relevant documentation, and tested the data for missing and apparently erroneous values. DOD provided data on 708,548 active-duty members of the Army, Navy, Air Force, and Marine Corps. On the basis of our tests of these data, we excluded 5,573 records (about 0.8 percent) because we could not match the servicemembers’ zip codes to the geographic areas for which income limits were defined, because data on family size were missing, or because anomalies in the monthly pay data prevented us from calculating an annual income amount. As a result, our servicemember population was 702,975 for this analysis. We concluded that these data were sufficiently reliable for our purposes. Nonetheless, our analysis was limited because it provided a snapshot of servicemembers’ potential eligibility to apply for the programs on the basis of their incomes in December 2005. We could not predict the effects of a future change in income determinations, because potential changes in servicemembers’ incomes or duty locations and annual adjustments to programs’ income limits would also affect eligibility.
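The record screening just described amounts to three exclusion tests applied to each pay record; below is a minimal sketch, with hypothetical field names rather than the actual layout of DOD's personnel and pay files.

```python
# Illustrative summary of the record screening described above: a record
# is dropped when its zip code cannot be matched to an income-limit area,
# when family size is missing, or when pay anomalies prevent calculating
# an annual income. Field names are hypothetical.

def usable_record(record, income_limit_areas):
    """Apply the three exclusion tests to one December 2005 pay record."""
    if record.get("zip") not in income_limit_areas:
        return False  # cannot match the member to an income-limit area
    if record.get("family_size") is None:
        return False  # cannot adjust the income limit for family size
    if record.get("pay_anomaly", False):
        return False  # cannot calculate an annual income amount
    return True

income_limit_areas = {"66441", "13601"}  # hypothetical matchable zip codes
records = [
    {"zip": "66441", "family_size": 3, "pay_anomaly": False},
    {"zip": "00000", "family_size": 2, "pay_anomaly": False},    # dropped
    {"zip": "13601", "family_size": None, "pay_anomaly": False},  # dropped
]
screened = [r for r in records if usable_record(r, income_limit_areas)]
print(len(screened))  # 1
```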
To determine the programmatic and market factors that could affect eligible servicemembers’ participation in the programs in selected communities gaining military personnel, we interviewed and reviewed relevant documentation from military installation officials; rental housing program officials (including officials from HUD and USDA field offices, public housing authorities, and state housing finance agencies); and local government or community organization representatives in four communities near installations that will gain military personnel as a result of the 2005 base realignment and closure (BRAC) process or other military initiatives. We selected Fort Benning, Georgia; Fort Bliss, Texas; Fort Drum, New York; and Fort Riley, Kansas. We selected these installations on the basis of their expected net gains of military personnel and preliminary information indicating that the surrounding communities had initiated planning to address the housing needs of incoming servicemembers. We also sought a balance between urban and rural locations. Our selection of four Army installations reflects the fact that the Army generally expected the largest personnel gains. We visited the Fort Riley, Kansas, area and contacted the other areas by telephone. We cannot generalize the information from these four installations to all installations that will gain military personnel. In addition to our local contacts, we also discussed factors that affect the use of federal rental housing programs with headquarters officials at the Army, DOD, HUD, USDA, IRS, and Treasury. We conducted our work in Washington, D.C.; Arlington, Virginia; and Junction City and Manhattan, Kansas, between November 2005 and July 2006, in accordance with generally accepted government auditing standards. In addition to the contact named above, Steve Westley, Assistant Director; Julianne Stephens Dieterich; Alison Martin; Bettye Massenburg; John McGrail; John Mingus; Marc Molino; David Pittman; and Barbara Roesmann made key contributions to this report.
Although the Department of Defense (DOD) pays active-duty servicemembers who do not live in military housing a Basic Allowance for Housing (BAH) to help them afford private market residences, expected growth at some military installations has raised concerns about whether nearby communities will have enough affordable rental housing for incoming personnel. In response to a congressional mandate, GAO assessed (1) how excluding BAH would affect servicemembers' eligibility to apply for federal rental housing programs and (2) factors that could affect their use of the programs in selected communities gaining military personnel. GAO compared servicemembers' eligibility for the programs as of December 2005 by including and excluding BAH from income determinations and examined factors affecting potential program use near four growing military installations. Excluding BAH from income determinations for federal rental housing programs would have substantially increased the percentage of servicemembers eligible to apply for the programs as of December 2005, assuming military pay was their only income. To be eligible to apply for rental assistance programs of the Departments of Housing and Urban Development (HUD) and Agriculture (USDA), or to live in units produced by the Internal Revenue Service's (IRS) Low-Income Housing Tax Credit program, households must have incomes at or below a specific limit, generally 50 percent or 60 percent of the median household income for their area. At the 50 percent income limit, 20 percent of servicemembers who received BAH would have been eligible if BAH were excluded from income determinations, compared with 1 percent with BAH included. Most junior enlisted members would have been eligible if BAH were excluded, as would small percentages of senior personnel. However, at all levels, many would not have been eligible if their households had even modest income from other sources. Agency and community officials cited factors that could limit the role of federal programs in building housing or helping servicemembers afford existing units near four installations that GAO examined. DOD officials said that servicemembers would be unlikely to need the programs because BAH payments provide for the median cost of market-rate housing. Some community officials said the tax-credit program, which spurs housing production, could be useful if more servicemembers qualified. But developers would have to compete for tax credits, and market factors, such as the financial feasibility of building units that junior enlisted members could afford, could limit their interest. The HUD and USDA programs might help some servicemembers rent existing units, but, because the programs are not entitlements, servicemembers could face lengthy waits, and eligible civilians might wait longer for assistance.
The Department of Education annually administers data collections to gather information from states about elementary and secondary education programs receiving federal assistance. When it administers a data collection, Education, like most federal agencies, is required to follow the provisions of the Paperwork Reduction Act (PRA) in order to maximize the utility of information to the federal agency and minimize the level of burden incurred by the states and agencies from whom it solicits the information. Traditionally, the department's program offices, which have responsibility for the administration and oversight of federal education programs, have developed and operated similar data collections independently of one another, in a continuous year-round process. In addition, much of the data requested from states has been focused on compliance and procedural matters and has overlooked performance and the impact of programs in the classroom. Moreover, the collection of this data has been complex and prone to error, given that it typically passes from about 94,000 public schools to more than 14,000 school districts and then to state education agencies before Education receives it. Collecting data can be both time-intensive and costly. Education estimated, for example, that in 2004 states spent approximately 45,000 hours and nearly $1.2 million responding to the department's requests for certain elementary and secondary education data. (See fig. 1.) Data collections are also costly for Education. In 2004, the department spent over $5 million administering certain data collections, including federal funds both for the staff who administered the collections and, in many instances, for contractors who analyzed the data. Initiated in 2002, Education's PBDMI has four goals: to improve the quality of the data Education collects about elementary and secondary education in terms of accuracy, consistency, and timeliness; to reduce the burden that states incur in reporting data to the department; to improve the focus of data analysis on program performance; and to improve Education's data-sharing relationship with the states. While this initiative is not the department's first attempt to overhaul the way it collects data, it nonetheless represents a fundamental change to its data management in that it is agencywide as opposed to program-specific. As envisioned, the new collection would consolidate 16 separate collections heretofore conducted by seven program offices. Given the additional reporting effort that development and testing of the system would require of states, Education sought and received OMB approval to collect data from the states through PBDMI. (See table 1 for a list of the separate collections the PBDMI is designed to supplant.) In addition to defining the information to be collected, the initiative involves the development of a Web-based data exchange network that will provide states and others with the ability to submit school-based data into one unified system to be stored in a data repository. The network will comprise three separate but interrelated systems. The first system, the submission system, developed in late 2004, is used to collect data from states, check the data for quality, and store the data in the data repository. The second system, the survey tool, which was also developed in 2004, enables Education to collect supplemental data from states and others that is also stored in the data repository.
The third system, the data analysis and reporting system, which is not yet operational, will allow users (i.e., program office staff and the public) to, among other things, query the data repository to analyze retrieved data and generate ad hoc reports. Education envisions that states and school districts would be able to use the data to assess their own program performance while also providing an opportunity for them to verify the quality of data submitted through the system. Figure 2 depicts the system design for the data network. Education had originally planned to have all components of the data exchange network fully operational in the spring of 2005 following the completion of key activities, such as (1) defining the data to be collected through in-depth consultations with department program offices and with state data providers, (2) populating the database with school-based data submitted by the states so that the quality of the stored data can be checked, and (3) training program staff on how to use the new network. PBDMI’s efforts to define what data were to be collected included forging agreements among Education’s individual program offices about which data would be essential to administration and oversight, particularly as performance indicators, and also developing common definitions for those elements that had been redundant. As a collaborative project, this involved developing consensus and receiving feedback from many parties: program offices, state policymakers and data providers, and organizations that develop data standards in the field of education. Within the department, the office responsible for the day-to-day work of the project and for ensuring its success is the Strategic Accountability Service, which also has responsibility for developing and disseminating agencywide performance indicators. However, a number of other offices and boards within the department have been charged with providing oversight and guidance: a steering committee, consisting of the PBDMI managers and other senior officials within the participating program offices, convened to share information on the development of the initiative; the Chief Information Officer (CIO); a data information working group; and Education’s investment review board. The data information working group, which is headed by Education’s CIO, has responsibility for ensuring the consistency and quality of new data collections and for facilitating the integration and sharing of information among program offices. The department’s investment review board has overall responsibility for reviewing, approving, and prioritizing department investments in technology, including the new network. As voluntary participants, stakeholders such as data coordinators from each of the 50 state education agencies, the District of Columbia, and Puerto Rico were provided with opportunities to give their input and feedback on the development of the initiative. The Education Information Advisory Committee, established by the Council of Chief State School Officers, facilitates this exchange. Figure 3 depicts the various groups involved in the initiative. Once departmental data requirements were identified, Education planned a series of data collections to be followed by extensive testing of the quality of that data by the program offices. Specifically, Education planned to have states submit the newly defined data for the 2002-2003 and 2003-2004 school years.
(States would voluntarily make these submissions to PBDMI while also maintaining their current multiple reporting obligations under Education’s program offices.) In conjunction with the program offices, PBDMI officials then anticipated validating and verifying the quality of the new data submitted using a number of checks and evaluations. Also at this time, the development of the system that staff would use to analyze data and generate reports was to be finalized. Once these activities were completed, the program offices were to assess whether the new system would be an adequate substitute for their existing data collections. Education has projected that it would spend just over $30 million through 2005, and initial estimates indicate that the data network will cost just over $4 million annually to maintain beginning in 2006. See figure 4 for project time frames and projected costs through 2009. Education officials spearheading PBDMI told us they have made progress defining the data to be collected. To do this, project officials worked with the program offices to identify their existing data needs. They also worked with program offices to translate these needs into performance-related data, such as math and reading achievement scores for different groups of students. Officials told us they had eliminated data elements collected by the program offices that are more indicative of process than performance. PBDMI officials encouraged program offices to identify performance-related data by using requirements specified in laws such as the No Child Left Behind Act (NCLBA) and using the goals in the department’s strategic plans. PBDMI officials also worked with the program offices to reach agreement on common definitions for the data elements selected and to eliminate redundancy. For example, some programs needed information on charter schools, and PBDMI officials coordinated efforts within the department to develop one standard definition for them. The end result of these efforts is a unified body of data elements that includes definitions for each of the data elements and identifies the program with primary stewardship over decisions about that element. According to one department official managing the initiative, this collection will improve the quality of the data by ensuring more consistency in what states provide. Although PBDMI officials reported progress in identifying performance-related data and establishing common data definitions, project officials have not fully documented these achievements by establishing a baseline and thus cannot be certain of the full extent of the progress made toward achieving their goal to enhance the department’s focus on outcomes and accountability. For example, while PBDMI officials were able to provide a list of 161 data elements focused on performance, they were unable to provide us with a comprehensive list of “process-oriented” elements that had been eliminated. Similarly, while PBDMI managers reported that the program offices had agreed to definitions for the bulk of the data elements (one official estimated that they reached agreement for about 90 percent of the data), they could not provide us with a complete list of redundant elements that had been eliminated or those that remain because they had not tracked them. While PBDMI officials could not provide a full list of disputed data elements, they reported that some differences still remain among program offices.
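As a rough illustration of the unified body of data elements described above, a single dictionary entry might pair an element with its agreed definition and its steward. The structure, field names, and steward office below are assumptions for illustration only, not Education's actual schema.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DataElement:
    """One entry in a unified data dictionary (illustrative structure only)."""
    element_id: str
    definition: str        # the common definition the program offices agreed to
    steward_office: str    # office with primary stewardship over the element
    performance_related: bool

# Hypothetical entry; the steward office named here is an assumption.
charter_status = DataElement(
    element_id="charter_school_status",
    definition="Whether a school operates under a charter, using the single "
               "standard definition that replaced office-specific variants.",
    steward_office="Office of Elementary and Secondary Education",
    performance_related=False,
)
print(charter_status.element_id, "->", charter_status.steward_office)
```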
Although PBDMI officials encouraged the use of strategic plans and statutory requirements to justify the selection of performance-based data, they told us that program offices had final say over what data to collect. For example, one office uses similar although somewhat broader criteria that allow it to collect “data that can be reliably obtained from states or that Education has a documented need for.” Additionally, according to initiative officials, some differences remain because of differing legislative requirements for particular programs, while others resulted from the preference of some offices to continue using the same definitions as in the past. Officials responsible for carrying out the PBDMI told us they were unable to reconcile all differences. Officials told us they were working with the program offices to reach agreements, but said the programs maintain primary control over defining their data needs and would make final decisions. Additionally, Education’s CIO, who is required to review all data collections and who has a primary role within the Data Information Working Group, told us that the CIO’s office does not have a role in resolving data disputes between program offices in order to ensure uniformity. However, an official also said that any differences that could not be resolved between the program offices would ultimately be arbitrated at the assistant secretary level within Education. PBDMI officials have conducted extensive outreach to the states to help unify their data definitions and upgrade their collection and submission systems. State data providers responding to our survey expressed general satisfaction with the department’s outreach. However, the majority thought that the burden of data collection and reporting would either increase or remain the same with implementation of the PBDMI. In addition, fewer than half expected the initiative to improve their ability to conduct their own in-state analyses, and then only somewhat. Despite the extensive outreach, the states were not able to produce enough data during test submissions in 2002-2003 and again in 2003-2004 for the department to validate the data’s quality and consider phasing out its standing collection systems. In order to ensure that states could meet Education’s requests for the quality data required as part of PBDMI, officials conducted extensive outreach to state agencies, their data providers, and data standards organizations. After Education developed its body of data elements, it consulted in 2002 with a task force consisting of a small number of state data providers to advise the department on the availability of the data it intended to collect. The department then conducted site visits, beginning in April 2003, to the 50 states, the District of Columbia, and Puerto Rico to obtain feedback on the ability of states to provide needed data and to prepare for testing the states’ ability to submit data. Education officials said they also made $50,000 grants to all 52 states to offset costs of overhauling information systems or obtaining additional staff. At the culmination of these visits, Education originally planned for states to transmit 2002-2003 school year data that could be tested for quality. However, Education scaled back the scope of this first data collection after recognizing that states would not, as yet, be able to offer certain types of data, such as data needed to meet requirements of the NCLBA.
Consequently, Education delayed its plans to assess the quality of the data states submitted and focused instead on the ability of states to electronically transmit as much PBDMI data as they could to the department. Also, Education decided to remove from PBDMI’s prospective collection some data elements that states reported were not available at that time. Under this transmission pilot test, 50 of the 52 participating states, including the District of Columbia and Puerto Rico, were able to submit some data to Education, demonstrating that PBDMI was technically feasible. After establishing this technical feasibility, Education began preparing in 2004 for its collection of 2003-2004 school year data by providing additional outreach to the states. Project officials conducted a second round of site visits beginning in April and provided further guidance to help states align their data definitions with PBDMI standards. By aligning definitions with PBDMI, Education attempted to minimize possible confusion about what data to submit and when, further assisting the department’s efforts to improve data quality. Department officials have said that establishing a unified body of data elements across the department and states—so that all involved parties use the same “language” when analyzing and sharing data—is a priority. Education officials attribute the lack of quality in the data the department currently collects from states and others to a variety of reasons, such as the lack of common data definitions that developed over time in response to the specific information needs of the program offices and data requirements arising at the state level. Officials with the initiative also conducted a limited number of quality assessments of state information systems to identify better ways of collecting and reporting data to the department. To serve states on a broader scale, Education conducted regional meetings, providing them with updates and feedback on the progress of the initiative. Officials also established a call center to answer states’ questions about the data to be submitted. Most states also received another $50,000 in grants for their continued participation in the initiative. Education began collecting 2003-2004 school year data in November 2004. To increase the likelihood that its definitions would be adopted by states and other data providers, PBDMI officials also collaborated with groups that establish data standards and influence the development of technical standards. For example, PBDMI officials contracted with the Council of Chief State School Officers to coordinate PBDMI conferences, help states prepare and submit data, and provide feedback as PBDMI developed data definitions. Education also collaborated with the Schools Interoperability Framework, a group that develops data-sharing standards and software primarily designed for schools and districts. By working with the Schools Framework, Education officials said they could improve data quality by increasing the likelihood that departmental definitions and other standards would be incorporated into software used by schools and districts. This interaction with the Schools Framework is Education’s primary attempt to deal with the long-standing problem of poor data provided by schools and districts. (See table 2 for a list of some of Education’s outreach activities.) States were generally satisfied with Education’s outreach activities. (See table 3.) Most state data providers—72 percent—rated Education’s site visits as effective in improving the partnership with the states.
One state data provider characterized his exchanges with the department as open and non-defensive, and further reported that the department had been responsive. More than half rated Education’s technical assistance (57 percent) and regional meetings (52 percent) as effective or very effective. While most states thought Education’s activities to improve its partnership with states were effective, some suggested areas for improvement. For example, 72 percent thought the site visits provided only some or little information on successes achieved in other states. In their survey responses, half of the states expressed the view that reducing their reporting burden was the most important PBDMI goal; however, fewer than a third of the states said they believed the initiative would do so. (See table 4.) Some states emphasized that their burden had increased in the short term as they continued dual reporting in order to meet the still-ongoing data collection requirements of the program offices. Three states reported to us their cost estimates of systems development projects needed to support PBDMI, which ranged from approximately $120,000 to as much as $5 million. Moreover, about 75 percent of the states reported that they thought the burden to collect data would remain the same or increase once PBDMI was implemented. Some state respondents expressed the opinion that until there was a firm commitment by Education to halt multiple data collections, their reporting burden would not likely lessen. “We are asked from the federal government for more and more information…. opens the flood gate for more and more reporting,” noted one official, adding that it is currently “hard to see the benefit at this time.” Some states also had reservations about the benefits of PBDMI for evaluation. One respondent cautioned, for example, that support within his state had weakened because of the lack of perceived benefits. Only about 20 percent of states expected PBDMI to improve or greatly improve their analytic capacity—that is, the ability to meet their own state reporting requirements, analyze program effectiveness, analyze student outcomes, and compare outcomes within states. Their reasons varied. For example, five states reported that they would continue to use their own systems. A few elaborated that their own information systems allow more detailed analyses of state performance than the information to be collected through PBDMI. Additionally, about as many states saw PBDMI as an effective tool for informing stakeholders as did not. Table 5 lists the extent to which state data providers expect PBDMI to enhance their analytical capacity in a variety of areas. As of June 3, 2005, only 9 states had submitted more than half of the requested 2003-2004 school year data, while 29 states had submitted less than 20 percent (see fig. 5). Although PBDMI officials said they would wait until August 2005 for states to submit the 2003-2004 data, they also acknowledged that many states would not be able to provide significant portions. The lack of state data is particularly acute in some programmatic areas. For example, many states have been unable to provide data on homeless and migrant students or students with limited English proficiency. States told Education officials early in the process that changes to state data collection processes, systems, and definitions would be needed to provide these types of information. We found that there were various reasons why states could not provide data.
Some states reported that they wanted better documentation from the department in areas such as clarifying established data definitions and the file format specifications needed to transmit data. States needed to make major modifications to their existing data collection and reporting processes in order to provide new information required by PBDMI. States also reported that they would not provide certain data elements that were inapplicable, hard to collect, or available elsewhere. Some also reported that there was still some confusion over multiple or unclear definitions. Department officials said that many states had initially overestimated their capabilities and that the data states said would be available differed greatly from what they have produced thus far. States have also noted competing demands for their time and resources stemming from the NCLBA. Some states reported they lacked resources, such as staff and money, to implement changes specific to the initiative. Specifically, 56 percent of the state survey respondents said that all or a portion of the $50,000 in grants they received from Education was used to contract for additional personnel, while a quarter of the states said that these funds were used to improve their information systems. Some states noted, however, that these funds were insufficient to make the changes necessary for their participation in PBDMI. Recognizing that obtaining state data has been problematic, Education has recently developed a preliminary strategy for working more closely with states to ensure that it obtains 100 percent of the data from all the states. While the strategy is not finalized, Education is currently considering actions such as issuing regulations requiring states to submit PBDMI data and allowing those states that provide acceptable amounts of “high quality” data under PBDMI to be exempt from existing data collections. For example, states that submit data to PBDMI that are also currently collected through the Consolidated State Report—one of many data collections required under the NCLBA—would not have to submit the same data under this data collection. Officials have also tentatively proposed collecting data of lesser quality that are readily available and obtaining data through other systems to supplement what has been provided thus far. The extent to which this proposal would undermine efforts to improve data quality and maintain program office buy-in is not clear. Another option under consideration at Education is to target departmental resources, such as $25 million in grants for system improvements from the Institute of Education Sciences, at states that actively participate in PBDMI. Education is proceeding with efforts toward full implementation of PBDMI—using the data for analysis and reporting—despite the limited amount of data collected. To do so, program offices decide whether the quality of the data (in terms of accuracy, consistency, timeliness, and utility) collected through PBDMI meets their needs. Once program offices validate the quality of the data, Education would begin to phase out existing data collections. Additionally, staff will be trained on how to access and use the data collected to date. Originally, Education expected to complete all of these activities by the spring of 2004. To the degree that it has been able to proceed, the department has developed a set of quality checks designed to ensure the accuracy and completeness of the data states submit.
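The department's quality checks are described only generally above, so the following minimal sketch is an illustrative assumption about the kinds of completeness and accuracy tests such a system might apply. The 64-element count reflects the collection described in this report's methodology discussion; the specific rules are not Education's.

```python
def completeness(submitted: set[str], requested: set[str]) -> float:
    """Share of the requested data elements that a state actually submitted."""
    return len(submitted & requested) / len(requested)

def range_check(element_id: str, value: float) -> list[str]:
    """Illustrative accuracy check: rates must fall between 0 and 1."""
    if element_id.endswith("_rate") and not 0.0 <= value <= 1.0:
        return [f"{element_id}: value {value} outside [0, 1]"]
    return []

requested = {f"element_{n:02d}" for n in range(64)}  # the 64 requested elements
submitted = {f"element_{n:02d}" for n in range(12)}  # a state that sent 12 of them

print(f"completeness: {completeness(submitted, requested):.0%}")  # prints 19%
print(range_check("dropout_rate", 1.7))  # flags an impossible dropout rate
```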
Nevertheless, two program offices, which, as two of the seven principal offices included in the initiative, have a role in determining whether the data are accurate and complete for their purposes, expressed concern that PBDMI’s procedures to ensure data quality may not be adequate. An official in the Office of Special Education and Rehabilitative Services (OSERS), which has collected almost 30 years of longitudinal data about the effectiveness of the nation’s special education programs, told us that PBDMI had been provided with information about the nearly 200 data quality checks used in special education collections, but was not sure that PBDMI adopted them all. PBDMI officials said they adopted those that were universally relevant. Further, this official expressed concern that PBDMI would not meet OSERS’s particular needs. Specifically, unlike other program offices, OSERS programs base student assessment on age as opposed to grade level attained. Additionally, this official was concerned about the timeliness of the data collected through PBDMI because that office generates a number of congressionally mandated reports at specific times of the year. Consequently, this office plans to compare the quality of its own data with the data collected through PBDMI. Officials in the Office for Civil Rights also expressed similar reservations about PBDMI’s administration of its large elementary and secondary survey of schools and districts used to assess compliance with civil rights laws and identify trends. Historically, district superintendents have responded to this survey in large enough numbers to allow Education to generalize from any findings with a high degree of confidence. However, when PBDMI administered the survey, fewer superintendents responded, and, according to the Office for Civil Rights, PBDMI did not have a readily available plan that adequately outlined the steps needed to raise the response rate. As of June 10, 2005, the response rate for this survey was lower than for previous surveys. Final implementation has also been hampered by delays in training and in delivery of the analysis and reporting system. Both are more than a year behind schedule. An official responsible for overseeing the training efforts told us that staff could not focus on training because considerable time was spent addressing states’ problems with submitting data through PBDMI. The data analysis and reporting system is more than a year behind schedule due to the lack of data and the failure of Education’s contractor to meet its scheduled delivery of the system. Education officials now expect to fully implement the system by March 31, 2006. While the data analysis and reporting system and training remain unfinished, PBDMI has offered presentations of these tools as a preview for staff to see the new system’s capabilities and to keep them apprised of the initiative’s progress. Despite the many obstacles confronting the PBDMI, Education officials said they expect to proceed with implementation of the initiative, albeit with some activities postponed. In August 2005, project officials developed a preliminary strategy designed to address the problem of collecting data from the states, such as providing exemptions from certain reporting requirements for some states.
However, this strategy has not been finalized, and Education has not developed a specific plan of action for how it might (1) help states that are deficient, (2) deal with state expectations for phasing out the multiple data collections, or (3) meet the expectations of its own program offices. The PBDMI represents an important step forward for the Department of Education in its efforts to monitor the performance of the nation’s elementary and secondary schools. By developing the ability to collect data that are more accurate, timely, consistent, and focused on key national performance indicators, Education will be much better informed when making its many policy and programmatic decisions. The initiative, by asking for a clearly defined set of information that is to be submitted only once, has the potential to substantially reduce state reporting burden for elementary and secondary programs as well as to help states develop better data systems. However, PBDMI is an ambitious and risky undertaking that requires the continued cooperation of a number of internal and external stakeholders. In order for PBDMI to be successful, the department must rely on states to provide new information at a time when they are busy implementing large new federal initiatives, such as the No Child Left Behind Act. While some states have been able to provide significant amounts of data, others continue to lag far behind. For the initiative to succeed, all states must submit timely, reliable, accurate, and consistent data. Consequently, it is important for the department to have a clear plan for addressing states with problems providing data and to continue to provide a proper combination of support and incentives for states to participate. By having worked closely with the states on their collection systems, PBDMI officials have the information they would need to develop a plan of action to help move them forward. Because PBDMI represents a significant change in the way the Department of Education conducts business, it can be accomplished effectively and efficiently only if accompanied by a change in management practices. However, program offices still retain much discretion over what data they will collect, how they will define it, and whether or not PBDMI’s data will meet their needs. While it is the initiative’s responsibility to make sure it collects data that meet the program offices’ requirements, PBDMI is also responsible for developing a data collection system focused on program performance and quality data. To the extent that programmatic differences, such as those over data definitions, inhibit PBDMI’s goals, there should be a clear process for reconciling those differences. If PBDMI truly represents a new way of doing business, Education should be able to ensure that its organizational units go along. It is difficult to see PBDMI achieving its full potential without a clear process for furthering the initiative’s goals. Fundamental to any large, complex effort’s success is a well-thought-out plan that tracks its progress against a set of clearly defined and measurable goals. PBDMI has not put in place such a planning and tracking system. State governments and Education’s program offices have devoted much time, effort, and money to participating in PBDMI with the idea that they would see benefits as a result. A lack of demonstrated progress and benefits potentially erodes state support, undermining the viability of this important initiative.
Some states are already beginning to lose sight of the potential benefits of PBDMI. As the department goes past its original completion deadline, it is important for it to lay out a clear plan for how it will now proceed. To address the issues we have identified with regard to planning, decision-making, and improving data quality, we recommend that the Secretary of Education develop (1) a strategy to help states improve their ability to provide quality data, given the challenges that many states face in providing data; (2) a clear process for reconciling differences between the program offices and the PBDMI oversight office to ensure that decisions critical to the success of PBDMI are made; and (3) a clear plan for completing the final aspects of PBDMI, including specific time frames and indicators of progress toward the initiative’s goals. We received written comments on a draft of this report from the Department of Education. Education agreed with our findings and recommendations and stated that it has devoted additional resources to the initiative and plans to issue a detailed project plan that outlines the steps needed to complete the initiative. These comments are reprinted in appendix II. Education also provided technical corrections and comments that we incorporated where appropriate. We are sending copies of this report to the Secretary of Education, the Office of Strategic Accountability Services, the Director of the Office of Management and Budget, and appropriate congressional committees. Copies will also be made available to other interested parties upon request. Additional copies can be obtained at no cost from our Web site at www.gao.gov. If you or your staff should have any questions, please call me at 415-904-2272 or contact me at bellisd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. The objective of our review of the Performance-Based Data Management Initiative (PBDMI) was to assess the progress Education has made in its implementation of the initiative, particularly with regard to (1) defining what performance-related data it will collect from states on behalf of the program offices, (2) assisting states in their efforts to submit quality information, and (3) utilizing performance-related data to provide enhanced analytic capacity within the program offices. We conducted our review between April 2004 and September 2005 in accordance with generally accepted government auditing standards. To assess the department’s progress in each of these areas, we reviewed documents relating to the implementation of the initiative, relevant laws, and information provided by the Strategic Accountability Service (SAS), the office responsible for PBDMI, and others. We interviewed key staff responsible for the initiative as well as officials in each of the offices that are participating in PBDMI. We also interviewed senior-level Education officials to determine their role in the implementation of PBDMI. To gain insight into state perspectives on the initiative, we administered a Web-based survey to state officials responsible for providing these data to Education. We received responses from 50 states, including Puerto Rico. We also interviewed a variety of external stakeholders, a data standards organization, and three contractors involved in the initiative, including an official from the Council of Chief State School Officers.
We also reviewed previously issued reports by Education’s Office of the Inspector General (IG) as well as GAO reports and testimonies. In addition to interviewing departmental officials, we also reviewed documentation on the initiative to gain a better understanding of what actions Education was undertaking to implement the goals of the initiative, including its data quality contract, data dictionary, and business plans, as well as justification reports to the Office of Management and Budget (OMB) required under the Paperwork Reduction Act to collect data. We also reviewed summary information about state performance data that was obtained as a result of site visits to states conducted in 2004, in order to analyze what data were obtained from states as a result of their efforts. Education provided information on states’ submission of requested data elements to PBDMI as of June 3, 2005. States were expected to provide data for 64 data elements, covering dropout rates; student performance on reading, science, and writing assessments; teacher certification; and many other areas. For each of these elements, Education determined whether each state had submitted the information, had not submitted the data, or did not collect the information. We incorporated into our report Education’s calculated percentages of elements submitted for each state. We determined that these data were sufficient for the purposes of this engagement. In order to document the burden hours associated with certain elementary and secondary data collections, we accessed 14 data collection justifications authored by the department’s program offices and submitted to the chief information officer. These reports had received OMB approval or were seeking approval to collect data from states and others. We talked with an official responsible for maintaining these documents at the department’s Web site to verify that these were the most recent data available for analysis. From each document we obtained the estimated state burden hours and costs and the federal administrative costs associated with each data collection. Each estimate was based on a formula that we adjusted to reflect these costs for the 52 states participating in the initiative. In some instances where an average was used, we assumed that the 52 states were similar in characteristics to the overall population of states included in Education’s estimates. However, we did not find it feasible to prorate the formulas for the federal administrative costs (based on 52 states) for each of the collections. A statistician verified each of the calculated estimates for accuracy. We also surveyed all 52 state data coordinators using a Web-based survey instrument in order to obtain their perspectives on various aspects of the initiative. Our survey instrument was developed based on information obtained during interviews with state data coordinators in Pennsylvania, Virginia, Washington, and Oregon. Additionally, other internal stakeholders specializing in technology and education were asked to review and comment on our draft survey instrument. The survey was pre-tested with Wyoming, North Carolina, and Illinois to determine whether the questions were clear and unbiased and whether the terms were accurate and precise. We included these three states in our pretests because they varied in size and technical capacity for data transmission as determined by an earlier Education survey. Based on their comments, we refined the questionnaire as appropriate.
Our final survey instrument asked a combination of questions that allowed for closed-ended as well as open-ended responses and included questions about state perspectives on PBDMI’s ability to achieve its goals. The survey was conducted using a self-administered electronic questionnaire posted on the Internet. We sent e-mail notifications about the upcoming survey to all 52 state data coordinators (the 50 states, the District of Columbia, and Puerto Rico) on November 15, 2004, and activated the survey shortly thereafter. Each potential respondent was provided a unique password and username by e-mail to limit participation to members of the target population. To encourage respondents to complete the questionnaire, we sent an e-mail message to prompt each non-respondent approximately 2 weeks after the survey was activated and followed up by e-mail or phone with each non-respondent several times thereafter. We closed the survey on January 21, 2005, after the 50th respondent had replied. Because this was not a sample survey, there are no sampling errors. However, the practical difficulties of conducting any survey may introduce errors, commonly referred to as non-sampling errors. For example, difficulties in how a particular question is interpreted, in the sources of information that are available to respondents, or in how the data are entered into a database or analyzed can introduce unwanted variability into the survey results. We took steps in the development of the survey instrument, the data collection, and the data analysis to minimize these non-sampling errors. For example, a survey specialist designed the survey instrument in collaboration with GAO staff with subject matter expertise. Then, as stated earlier, it was pre-tested to ensure that the questions were clear, unbiased, and accurate. When the data were analyzed, a second, independent analyst checked all computer programs. Because this was a Web-based survey, respondents entered their answers directly into the electronic questionnaire, eliminating the need to have the data keyed into a database and thus removing an additional source of error. In addition to the contact named above, the following individuals made important contributions to this report: Bryon Gordon, Assistant Director; Carla Craddock, Analyst-in-Charge; Susan Bernstein; David Dornisch; Mary Dorsey; Kimberly Gianopoulos; Brandon Haller; Stuart Kaufman; Jonathan McMurray; Valerie Melvin; James Rebbe; Gloria Hernandez Saunders; Kimberly Siegel; Michelle Verbrugge; and Elias Walsh.
As a condition of receiving federal funding for elementary and secondary education programs, states each year provide vast amounts of data to Education. While the need for information that informs evaluation is important (particularly with the No Child Left Behind Act), Education's data gathering has heretofore presented some problems. It has been burdensome to states because there are multiple and redundant requests administered by a number of offices. In addition, the resulting data supplied by states have not been accurate, timely, or conducive to assessing program performance. To improve the information by which it evaluates such programs and also to ease states' reporting burden, Education in 2002 initiated an ambitious, multiyear plan to consolidate elementary and secondary data collections into a single, department-wide system focused on performance. Given its importance, we prepared a study, under the authority of the Comptroller General, to provide Congress with information on the initiative's progress. Through its Performance-Based Data Management Initiative (PBDMI), Education has consolidated and defined much of the data it anticipates collecting under a unified system. Education reports that many data definitions have been agreed to and data redundancies eliminated. PBDMI officials also said, however, that to date the department has not been able to resolve all differences among the program offices that manage many of the different data collections. PBDMI officials have conducted extensive outreach to the states to advance the initiative. The outreach to states involved regional conferences, two rounds of site visits, and, according to officials, $100,000 in grants to most states to help offset their costs. State data providers responding to our survey expressed general satisfaction with the department's outreach, but some were not optimistic that the initiative would ease their reporting burden or enhance their own analytic capacity. The states were not able to produce enough data during test submissions in 2003 and 2004 to enable data quality verification or the phasing out of the department's multiple data collections. With regard to the lack of sufficient data from many states, Education officials said some states lack the technical capacity needed to meet new performance data requirements. State data providers reported having competing demands for their time and resources, given other federal initiatives. Education officials have decided to proceed with the undertaking and have developed a draft interim strategy for moving forward. However, they currently have no formal plan for how they would overcome obstacles such as the lack of state data and the initiative's other technical and training delays.
The Commodity Futures Trading Commission Act of 1974 (the Act) established the Commodity Futures Trading Commission (CFTC) as an independent agency to better enforce the Commodity Exchange Act and oversee and regulate what was at the time an increasingly complex futures market. The Act requires the agency to simultaneously submit its budget request to House and Senate Appropriations and oversight committees. The Act also grants independent leasing authority to CFTC. As such, the CFTC is not required to obtain its space through the General Services Administration (GSA). Commodity futures trading has grown increasingly complex since its 19th-century origins, when agricultural commodities dominated the industry. During the 20th century, futures trading expanded to include greater diversity in commodities, such as metals, oil, and financial products, including stock indexes and foreign currency. Subsequently, the Dodd-Frank Wall Street Reform and Consumer Protection Act (Dodd-Frank Act) expanded CFTC’s regulatory jurisdiction to include the previously unregulated over-the-counter derivatives market, commonly known as the “swaps” market. CFTC, with a fiscal year 2015 budget of approximately $250 million, is responsible for administering and enforcing provisions of the Commodity Exchange Act, fostering open and transparent markets, and protecting futures markets from excessive speculation, commodity price manipulation, and fraud. The agency maintains four mission-related oversight divisions: Market Oversight, which conducts trade surveillance and oversees trading facilities such as futures exchanges; Swap Dealer and Intermediary Oversight, which oversees registration and compliance in the derivatives market; Clearing and Risk, which oversees derivatives clearing organizations and other major market participants; and Enforcement, which investigates and prosecutes alleged violations of the Commodity Exchange Act. In addition, the agency maintains several divisions related to functional operations and support in its four locations. CFTC closed two regional offices, in Los Angeles and Minneapolis, in 2003 and 2007, respectively. CFTC began planning to substantially expand leased space prior to the enactment of the Dodd-Frank Act and then entered into leases that did not make efficient use of limited government resources. CFTC lease costs vary compared to other federal leases in the same markets: with the exception of its Washington, D.C., headquarters, CFTC’s rentable-square-foot (rsf) costs are lower than or about the same as lease costs among other federal agencies in the regional office locations. CFTC followed some elements of leading government-leasing practices; however, the agency lacked comprehensive policies and procedures to guide efficient and cost-effective decisions for lease procurement. As a result, CFTC currently has lease obligations for unused space that extend to 2021 and beyond. CFTC renewed leases and expanded space in its Washington, D.C., headquarters and three regional office locations prior to receiving the funding necessary to hire staff to occupy the additional space. Anticipating the increased oversight that would result from regulating and monitoring the swaps market, CFTC began planning for the expansion of its leased space in the fiscal year 2009 time frame, more than a year before the enactment of the Dodd-Frank Act in July 2010. The resulting leasing decisions negatively affected the CFTC’s space utilization and resulted in inefficient use of limited government resources. Federal standards for internal control call for agencies to identify and analyze relevant risks associated with achieving their objectives.
According to these standards, management needs to comprehensively identify risks and should consider all significant interactions between the entity and other parties, as well as internal factors at both the entity-wide and activity levels, including economic conditions. Although CFTC’s leasing decisions from fiscal years 2009 to 2012 significantly increased its space, CFTC could not provide us with an analysis of risks related to these decisions. CFTC has incrementally amended leases and expanded space in its Washington, D.C., headquarters since first occupying the building in 1995; in 2009, CFTC extended by 10 years the lease that was set to expire in 2015 (through the end of fiscal year 2025) and expanded its leased space by more than 78 percent from fiscal years 2010 through 2012. Similarly, CFTC amended existing leases and expanded space in its Chicago and New York regional offices in 2009 and 2011, respectively, and in 2011, CFTC relocated its Kansas City regional office to a larger space. As discussed below, these expansions resulted in reduced rates of occupancy and increased costs. As table 1 shows, overall, the CFTC increased its leased space by 74 percent from fiscal year 2008 through fiscal year 2015. The greatest increase occurred in the Kansas City Regional Office, where the volume of office space more than doubled. Also during this period, the Kansas City Board of Trade closed and merged with the Chicago Mercantile Exchange. In the Chicago and New York regional offices, CFTC also increased leased space, adding approximately 20,000 square feet in Chicago and 22,000 square feet in New York. According to CFTC, the additional space currently gives the agency the capacity to accommodate 1,289 staff overall. The agency requested additional funding to cover its new regulatory responsibilities and, as figure 1 below demonstrates, in fiscal years 2009 and 2010 the CFTC was appropriated funding in excess of its request. Figure 1 also shows that this period was followed by 5 years of appropriations less than the amount requested. Therefore, the CFTC could not expand its staff at a rate that would allow for full utilization of the additional leased space. On average, the agency received about 109 percent of the funding it requested in fiscal years 2009 and 2010 and about 76 percent of its requests from fiscal years 2011 through 2015. Figure 1 illustrates that, while not always granting CFTC’s full funding request, Congress increased funding in nominal terms for CFTC every year from fiscal year 2008 through fiscal year 2015, with the exception of fiscal year 2013, when funding declined slightly. In other words, CFTC’s fiscal year 2015 appropriations represent an increase of nearly $138 million, or about 123 percent, compared to fiscal year 2008. The amount CFTC allocated versus the amount it requested for staff follows a similar pattern. In fiscal years 2009 and 2010, based on its higher-than-requested appropriations, CFTC hired more staff than it had anticipated. In the following 5 fiscal years, with lower appropriations, it hired fewer staff than requested. CFTC, on average, hired about 13 percent fewer staff than it originally requested between fiscal year 2008 and fiscal year 2015 (see fig. 2). CFTC federal employee staffing increased in absolute terms by about 53 percent from fiscal year 2008 through fiscal year 2015, according to CFTC data (see table 2 below).
Moreover, CFTC greatly expanded the number of on-site contractors it employs, an increase of about 324 percent during the same period. According to CFTC, since commodity futures trading is increasingly electronic and data intensive, most of the CFTC’s on-site contractors are involved with operating and maintaining CFTC’s electronic data systems. This increase also reflects the $35 to $55 million Congress set aside for the purchase of information technology in the appropriations for fiscal years 2012, 2014, and 2015. When CFTC’s employees and on-site contractors are combined, aggregate agency staffing increased from 549 to 1,006, or more than 80 percent, from fiscal years 2008 through 2015. This increase, however, falls below the approximately 1,289 positions for which the CFTC leased additional office space. As discussed below, expanding leased space before obtaining an appropriation to fund additional staff has resulted in substantial space underutilization, which increased the space allocation per CFTC staff member (including CFTC employees and on-site contractors). According to our analysis of CFTC data, the overall allocation of usable square feet (usf) per CFTC staff member in fiscal years 2008 through 2010 was 303 square feet on average. From fiscal years 2011 through 2015, the allocation increased to 465 square feet on average, in contrast to the approximately 300 usf per employee noted in CFTC’s 2009 Program of Requirements, a space-planning document for all four of the agency’s office locations. The total space utilization for all four CFTC offices combined was about 78 percent at the end of fiscal year 2015. However, each office had differing levels of space utilization, as figure 3 below illustrates, according to our analysis of CFTC data. As figure 3 illustrates, the Kansas City Regional Office is the most underutilized of the four offices, with a staff of 31, including contractors, housed in space intended to accommodate 72. When we visited the Kansas City office, officials told us that CFTC vacated approximately a third of its leased space in response to the recommendation by CFTC’s Office of the Inspector General (OIG) that the agency take steps to dispose of underutilized property in that location, including subleasing or returning the space to the landlord (see figure 4 below). According to CFTC officials, the only effective option to cease paying for the vacant space in Kansas City involves negotiating with the landlord to return the space. The landlord agreed to try to lease the vacant floor; however, there has been limited interest thus far, and CFTC continues to pay rent on the vacant space. In our review of CFTC leases, we found that all of the leases include provisions for subleasing space. CFTC officials told us that the agency was authorized to enter into subleases only in circumstances where the sublease would further the purposes of the Commodity Exchange Act. According to CFTC, subleasing the space in a manner that furthers the purposes of the Act would, as a practical matter, be very difficult to accomplish. The CFTC’s OIG released additional reports in 2015 that found underutilized space in the Chicago and New York City regional offices, though not nearly to the extent found in the Kansas City Regional Office. The report on the Chicago Regional Office recommended better utilization of space.
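The utilization rates cited in this report follow directly from staffing and seating-capacity counts. A minimal sketch of the arithmetic, using figures reported above and assuming that utilization is measured as staff per designed seat:

```python
def utilization(staff: int, capacity: int) -> float:
    """Space utilization as the share of designed seats that are occupied."""
    return staff / capacity

# Figures reported above: employees plus on-site contractors versus capacity.
print(f"All offices: {utilization(1_006, 1_289):.0%}")  # about 78 percent
print(f"Kansas City: {utilization(31, 72):.0%}")        # about 43 percent
```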
According to our analysis of CFTC’s data, space utilization in the Chicago office improved as CFTC increased the number of staff (including contractors) from 137 in fiscal year 2014 to 150 in fiscal year 2015. The Chicago office currently utilizes about 88 percent of its space (see fig. 3). With regard to the New York City Regional Office, the OIG recommended that CFTC sublet or negotiate returning the additional space it leased beginning in fiscal year 2012. Our analysis of CFTC data found that space utilization in the New York City office, similar to the Chicago office, also improved as staff (including contractors) increased from 80 to 91, or nearly 14 percent, from fiscal year 2014 to fiscal year 2015. When we visited the New York City office in January 2016, we observed vacant offices, some of which were unfinished, unventilated, and not adjacent to one another. As of the end of fiscal year 2015, the New York City office had a utilization rate of 68 percent (see fig. 3). CFTC officials said that they have notified the landlord that they would like to return some space on one floor, but the building currently has a vacancy rate of about 30 percent, so this space would likely be difficult to rent. According to CFTC data, combined lease costs for all CFTC offices reached about $20.6 million in fiscal year 2015—a 79 percent increase in nominal dollars over the combined fiscal year 2008 lease costs (see app. II for details on lease costs). Each of the four CFTC office leases covers a period of about 10 years. As such, the current leases will not expire until fiscal years 2021 through 2025. The Kansas City Regional Office lease will expire first, in 2021, followed by the New York lease in 2022, the Chicago lease in 2022, and the Washington, D.C., lease in 2025. According to CFTC, lease renewal planning typically begins about 2 years in advance of lease expiration, so it is reasonable to expect CFTC to begin planning around fiscal year 2019. CFTC officials told us that they converted certain Tenant Improvement Allowances (TIA) provided under leases into rent abatements in order to reduce rent in 2011, 2012, and 2013. CFTC used TIA to complete improvements and alterations to the CFTC office space in Washington, D.C., Chicago, New York, and Kansas City, as well as to cover such costs as architectural expenses, furnishings, equipment, cabling, and moving expenses for the CFTC offices. In addition, under the terms of certain leases, any unused portion of the TIA could be converted to a rental abatement and then used to offset rental payments. For example, the Kansas City Regional Office lease sets the TIA at $35 per rentable square foot. According to our analysis, at this rate, $852,670 was available for tenant improvements, and for this particular lease, any amount not expended in the first 6 months was available as a rebate against the rent expense. As appendix II shows, CFTC used $78,222 of TIA in fiscal year 2013 as a rent credit. CFTC did not state how it used TIA for space planning. We compared CFTC’s lease costs for fiscal year 2013 through fiscal year 2015 to the average lease costs of other federal agencies that lease through GSA in privately owned buildings in the four markets where CFTC has offices. We also compared the cost of private-sector leases for 2013 and 2014, as measured by the Building Owners and Managers Association (BOMA), a widely recognized industry association.
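Before turning to those comparisons, the Kansas City TIA arithmetic described above can be reproduced directly. In the sketch below, the square footage is back-calculated from the reported figures and should be treated as an assumption:

```python
TIA_RATE = 35.00         # dollars per rentable square foot, per the lease
RENTABLE_SQ_FT = 24_362  # back-calculated as 852,670 / 35; an assumption

tia_pool = TIA_RATE * RENTABLE_SQ_FT
print(f"TIA available for improvements: ${tia_pool:,.0f}")  # $852,670

# Any amount not expended in the first 6 months was available as a rebate
# against rent; CFTC reported a $78,222 TIA rent credit in fiscal year 2013.
fy2013_rent_credit = 78_222
print(f"Reported fiscal year 2013 rent credit: ${fy2013_rent_credit:,}")
```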
As table 3 below shows, with the exception of the Washington, D.C., headquarters, where CFTC’s 2015 lease costs are about 18 percent higher than the average lease costs for federal agencies leasing office space through GSA, CFTC’s rentable-square-foot costs are lower than or about the same as lease costs among other federal agencies in the regional office locations. More specifically, CFTC’s lease costs were lower than those of other federal agencies in Kansas City and New York and slightly higher than those of other federal agencies in Chicago (see table 3 below). As discussed previously, the CFTC began planning space expansion more than one year before the Dodd-Frank Act was signed into law and made leasing decisions in response to anticipated requirements of the act without fully assessing the risk of not receiving appropriations sufficient to execute its plans. According to CFTC OIG estimates, the failure to consider this risk has resulted in the agency possibly spending as much as $74 million for vacant space, if current conditions persist through the end of the current leases in fiscal years 2021 through 2025. Thus, the CFTC is not carrying out its mission in an efficient and cost-effective manner. Both CFTC’s guidance and GSA’s guidance share a common purpose: to maximize the value for the government while also fulfilling the agency’s mission. CFTC’s Statement of General Principles, which outlines the actual lease acquisition process, states a goal of maximizing competition to the extent practicable and making reasonable decisions to obtain space that enables the Commission to accomplish its mission in an efficient and cost-effective manner. Similar to CFTC’s Statement of General Principles, GSA’s Leasing Desk Guide states that GSA aims to help ensure that it leases quality space that is the best value for the government. However, CFTC’s guidance is very high level and lacks the detail of GSA’s guide, which provides more comprehensive leasing policies and procedures. According to federal standards for internal control, policies and procedures help ensure that actions are taken to address risks and are an integral part of an entity’s accountability for stewardship of government resources. When we applied this standard, we found that CFTC’s policies did not include guidance to assess the risk of not receiving its full budget requests. CFTC has two documents, the 2009 Program of Requirements and the 2011 Statement of General Principles, that compose its leasing guidance. The Program of Requirements, according to CFTC officials, is a space-planning document for all four of its office locations. It provides information on projected employee and contractor staff size and on requirements for offices, workstations, common-use areas, and other space needs. Based on the Statement of General Principles, CFTC follows select portions of leading government guidance and regulation that facilitate maximizing competition to the extent practicable; avoiding conflicts of interest; adhering to the requirements of procurement integrity; and making reasonable decisions to obtain space that enables the Commission to accomplish its mission in an efficient and cost-effective manner. For example, CFTC officials told us that they followed select portions of leading government guidance when they began expanding space in 2009. Consistent with internal control standards, GSA’s guidance provides comprehensive details on ways to formulate, document, and operationalize lease procurement.
For example, GSA’s Leasing Desk Guide specifically states that confirming space requirements includes verifying that the client has appropriate funding. By comparison, CFTC’s guidance does not include this level of detail. The lack of this type of specificity in CFTC’s guidance may have contributed to the agency’s executing its lease procurements in a manner inconsistent with standards for internal control and, thereby, to decisions that were not cost-effective. Although CFTC officials told us that the agency relies on a commercial real estate broker for all phases of the office space acquisition process—including (1) conducting market surveys, advertising CFTC’s requirements, and drafting solicitations for offers; (2) analyzing offers received; and (3) reviewing lease documents—this reliance did not prevent the agency from entering into lease agreements before the agency had the funding necessary to staff the space. Federal internal control standards also state that significant decisions need to be clearly documented and readily available for examination. CFTC could only provide us with partial documentation and analysis of how it made decisions to enter into new or expanded leases. CFTC officials told us that they could not locate additional documentation because the employees who had responsibility for leasing had left the agency. Without this documentation, future decision makers may lack the institutional knowledge they need to make informed decisions. Utilizing leading government guidance could have helped CFTC to make reasonable decisions to obtain space that enables the Commission to accomplish its mission in an efficient and cost-effective manner—in keeping with its Statement of General Principles. In its Fiscal Year 2014 Agency Financial Report, CFTC stated that it plans to review and revise its space-related policies and procedures in keeping with OMB’s National Strategy for the efficient use of space and real property. As of February 2016, CFTC officials told us that these policies and procedures were under review, but the officials could not provide any other details or a timeline for completion. Further, when the current leases expire between April 2021 and September 2025, it will have been approximately 10 years since the agency last undertook a lease procurement. Without comprehensive policies and institutional knowledge, the agency may be at risk of continuing to make decisions that do not make the best use of limited government resources. As noted above, based on an executive branch memo and initiatives, a GSA study, and our own research, we have identified several options that CFTC may pursue now and in the future to increase space utilization and improve the cost-effectiveness of its leasing arrangements: (1) relocating offices to less costly locations, (2) reducing the office space required through increased telework, and (3) consolidating two regional offices—Kansas City and Chicago. CFTC officials told us that these options may not be achievable before their current leases expire. However, they have not fully examined the current feasibility of these options or their potential impact on reducing leased space and increasing cost-effectiveness in the future. Looking ahead, CFTC’s current leases are set to expire from fiscal years 2021 through 2025, and CFTC officials said that a reasonable practice is to begin planning for leasing activities 2 years prior to lease expiration.
In the case of high-value leases—those with an annual rent above $2.85 million—GSA’s Leasing Desk Guide suggests the lease acquisition process begin 3 to 5 years prior to lease expiration. In keeping with these time frames, CFTC would begin planning for new leases within the next few years; however, CFTC does not have a timeline for doing so. CFTC’s offices in Washington, D.C., Kansas City, Chicago, and New York City are located in privately owned buildings in close proximity to the financial markets they oversee. According to CFTC, these locations support the agency’s oversight role, as, for example, the Dodd-Frank Act requires CFTC to perform annual examinations of two important derivatives clearing organizations—organizations that process the financial transactions involved in futures trading. The examination of these organizations requires meetings with officials and routine on-site examinations of their operations. However, there are federal buildings in Chicago and New York City within walking distance of CFTC’s current offices in those cities. CFTC officials told us that they did not consider leasing space in these federal buildings at the time they entered into new or expanded leases. Without this analysis, CFTC officials could not have known whether the federal buildings had space available at a lower rent per square foot at the time they entered into lease agreements. As a result, they may not have acquired space in a cost-effective manner, consistent with their Statement of General Principles. CFTC’s Washington, D.C., headquarters is located in the Central Business District submarket, which has one of the highest average rental rates in the region. By comparison, some other federal agencies have located their headquarters outside of downtown Washington, D.C. For example, the Farm Credit Administration, an independent regulatory agency that examines the banks, associations, and related entities of the Farm Credit System, located its headquarters in suburban northern Virginia. Further, the U.S. Department of Commerce’s Economics and Statistics Administration announced in January 2016 that it plans to move its Bureau of Economic Analysis—approximately 590 employees—from privately leased space in downtown Washington, D.C., to federally owned space in suburban Maryland. According to the U.S. Department of Commerce, the new location is expected to save taxpayers $66 million over 10 years. The 2010 presidential memorandum directs executive branch agencies to dispose of unneeded federal real estate, including a specific directive to “take immediate steps to make better use of remaining property assets as measured by utilization and occupancy rates.” Additionally, a House committee report accompanying a fiscal year 2016 appropriations bill directs CFTC “to find ways to decrease space and renegotiate leasing agreements.” CFTC has conducted some analysis of optimizing space and potential lease-cost reductions for its current locations. CFTC officials said that under current lease agreements, the agency has limited options for negotiating changes in the lease terms. For example, the leases lack provisions that would allow CFTC to terminate them before the agreed-upon term without remaining responsible for the outstanding rent payments. However, CFTC has not completed a full analysis of the potential costs and benefits of relocating its offices.
Without this type of analysis, CFTC cannot make fully informed decisions about the cost-effectiveness of relocating its offices in the near term or fully assess the alternatives available to improve its space utilization. According to a 2011 GSA study, federal agencies and private sector organizations have been forced to continuously evaluate their current workspace utilization. The Telework Enhancement Act of 2010 requires the head of each executive agency to establish and implement a telework policy for eligible employees and requires the Office of Personnel Management to assist agencies in establishing appropriate qualitative and quantitative measures and teleworking goals. GSA’s study states that federal agencies’ expanded use of telework could reduce their real estate footprint and real estate costs. With wireless communication tools such as smartphones and wireless networking available, federal agencies and private organizations have turned to alternative work environments that have the potential to reduce workspace costs and optimize physical workspace. OMB’s National Strategy notes that employee telework, among other things, has resulted in a need for less space. For example, we found in 2013 that some agencies, such as GSA and the U.S. Department of Agriculture’s Forest Service, have adopted “office hoteling arrangements,” a practice of providing office space to employees on an as-needed basis. This reduces the amount of physical space an agency needs to purchase or rent. Specifically, GSA implemented a hoteling program for all employees that allowed it to eliminate the need for additional leased space at four locations in the Washington, D.C., area, resulting in projected savings of approximately $25 million in annual lease payments and about a 38 percent reduction in needed office space. Further, the U.S. Forest Service uses hoteling, among other alternative workplace arrangements, to save an estimated $5 million in annual rent. Currently, 77 percent of CFTC employees have agreements for either recurring or episodic telework (see table 4). According to GSA’s study, the average workspace typically costs between $10,000 and $15,000 annually per person. Eliminating 100 workspaces, for example, could conceivably save an organization over $1 million a year. While CFTC officials told us that they do need an on-site presence in certain cases, such as for oversight and enforcement activities, commodity futures trading is now wholly electronic, according to CFTC officials, so increased teleworking could be a possible alternative for reducing CFTC’s rental space costs in future leases. CFTC officials said that their current policy allows for recurring telework 1 to 2 days every 2 weeks but that they have not assessed the option of increasing telework and reducing leased space as current leases expire and are renewed. However, CFTC officials said they have efforts under way to consider what policy makes sense for their operations.
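The GSA study’s arithmetic is straightforward: estimated annual savings equal the number of workspaces eliminated multiplied by the annual cost per workspace. The following minimal sketch uses the study’s $10,000 to $15,000 range; the workspace count is a hypothetical input, not a CFTC figure.

# Illustrative only: annual savings from eliminating workspaces, based on the
# $10,000-$15,000 annual per-workspace cost range cited in GSA's 2011 study.
# The number of workspaces eliminated is a hypothetical input.
COST_PER_WORKSPACE_LOW = 10_000   # dollars per workspace per year
COST_PER_WORKSPACE_HIGH = 15_000

def annual_savings(workspaces_eliminated):
    """Return the (low, high) range of estimated annual savings in dollars."""
    return (workspaces_eliminated * COST_PER_WORKSPACE_LOW,
            workspaces_eliminated * COST_PER_WORKSPACE_HIGH)

low, high = annual_savings(100)
print(f"Eliminating 100 workspaces: ${low:,} to ${high:,} per year")
# Prints: Eliminating 100 workspaces: $1,000,000 to $1,500,000 per year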
OMB’s National Strategy states that a key step in improving real property management is to reduce the size of the inventory by prioritizing actions to consolidate, co-locate, and dispose of properties. As discussed, the Kansas City regional office currently has 31 staff working in office space that accommodates 72. In addition, the Kansas City Board of Trade merged with the Chicago Mercantile Exchange in 2012. According to CFTC officials, the Kansas City Board of Trade traded futures and options for only one product (hard red winter wheat) during all or substantially all of the period from fiscal years 2008 through 2012. The majority of Kansas City CFTC staff are involved with enforcement, swap dealer and intermediary oversight, and market oversight—similar to the staff in the Chicago office. Further, the Chicago Regional Office also has underutilized office space and, according to our analysis, could possibly accommodate staff from the Kansas City office. We found that CFTC’s space could be better utilized in both of these regional offices. As noted above, for the Kansas City regional office, CFTC officials have said that they have been unable to return their unused space. CFTC officials told us that they have not assessed the option of consolidating these two regional offices. As a result, CFTC may continue to pay for vacant space through the duration of the Kansas City lease until 2021. While not an exhaustive list, these options—relocation, telework, and consolidation—are in keeping with OMB’s National Strategy to realize the greatest efficiency, reduce portfolio costs, and conserve resources for service and mission delivery. CFTC began planning to substantially expand leased space in anticipation of proposed requirements prior to the enactment of the Dodd-Frank Act. The agency renewed leases and expanded space in its four office locations before fully assessing the risk of not receiving sufficient funding to hire staff to use the space. By not considering this risk, CFTC has taken on the obligation to potentially pay as much as $74 million for unused space over the term of the current leases—a situation that could span more than a decade, given the agency’s lease obligations. We found that CFTC did not have comprehensive leasing policies or procedures in place but followed some leading government guidance when procuring additional space. This lack of comprehensive policies and procedures presents challenges in making sound management decisions to obtain space in an efficient and cost-effective manner. OMB’s National Strategy states that a key step in improving real property management is to reduce the size of the inventory. Potentially cost-effective options include relocating offices to less costly locations, enhancing teleworking, and consolidating two regional offices—Kansas City and Chicago. Exploring these possibilities and establishing a timeline for completion could result in CFTC’s using its available funds in a more cost-effective manner. To help ensure that CFTC makes cost-effective leasing decisions and considers options for reducing future lease costs, we recommend that the Chairman of the CFTC take the following two actions prior to entering into any new or expanded lease agreements:

Ensure that as CFTC revises its leasing policies and procedures, it includes comprehensive details on lease procurement that are consistent with leading government guidance and standards to assure cost-effective decisions.

Establish a timeline for evaluating and documenting options to potentially improve space utilization and reduce leasing costs, including, but not restricted to, (1) moving offices to less costly locations, (2) implementing enhanced telework, and (3) consolidating the Kansas City and Chicago regional offices.

We provided a draft of this report to CFTC for review and comment.
CFTC provided written comments, which are summarized below and reprinted in appendix IV of this report. CFTC also provided technical comments, which we incorporated as appropriate. CFTC concurred with our first recommendation that, prior to entering into any new or expanded lease agreements and as it revises its leasing policies and procedures, it should include comprehensive details on lease procurement that are consistent with leading government guidance and standards to assure cost-effective decisions. CFTC stated that it intends to review its procedures to address the recommendation and to ensure that the agency makes cost-effective decisions. In addition, CFTC noted that its staff will engage the General Services Administration (GSA) regarding how the two agencies can work together to better leverage GSA's leasing expertise in addressing current leasing issues and assessing future space requirements. We are encouraged by these plans, as they have the potential to help CFTC make sound leasing decisions to obtain space in an efficient and cost-effective manner. CFTC generally concurred with the second recommendation, which states that, prior to entering into any new or expanded lease agreements, CFTC should establish a timeline for evaluating and documenting options to potentially improve space utilization and reduce leasing costs, including, but not restricted to, (1) moving offices to less costly locations, (2) implementing enhanced telework, and (3) consolidating the Kansas City and Chicago regional offices. Specifically, CFTC stated that it will develop a timeline and plans for evaluating, initiating, implementing, and documenting space-related actions, especially as the various lease expiration dates approach. CFTC further stated that it will continue to look for actions it can take to make the most efficient use of space. However, CFTC noted that it does not believe it can reduce leasing costs in the near term without incurring significant expense and likely increasing the agency's overall space-related expenses. According to agency officials, CFTC’s leases generally lack provisions that would allow CFTC to terminate leases prior to the agreed-upon term. CFTC also did not specifically agree or disagree with considering the three potential options we suggested. We continue to believe that CFTC should consider these options to make the most efficient use of space prior to entering into any new or expanded lease agreements. The options we suggested are in keeping with OMB’s National Strategy and other agencies’ actions to realize the greatest efficiency, reduce portfolio costs, and conserve resources for service and mission delivery. We will send copies of this report to the appropriate congressional committees and the Commissioner of the Commodity Futures Trading Commission. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at 202-512-2834 or wised@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V. This report examines (1) the extent to which CFTC made cost-effective decisions and followed leading government guidance in planning for additional space for fiscal years 2008 through 2015 and (2) what potential options exist to improve the cost-effectiveness of CFTC’s leasing.
To identify leading government practices and guidance on leasing, we reviewed GAO reports on real property and held discussions with GSA officials. To identify requirements applicable to CFTC leasing, we reviewed federal laws and regulations. To address the extent to which CFTC followed leading government practices and guidance for leasing additional space, we reviewed and analyzed CFTC’s strategic plans, lease procurement policies, and space-planning documents covering fiscal years 2008 through 2015. We also reviewed and analyzed the extent to which CFTC’s leasing practices aligned with the Office of Management and Budget’s (OMB) National Strategy for the Efficient Use of Property (National Strategy) and its Reduce the Footprint policy, as well as with GSA’s leasing practices and guidance on lease procurement and pricing, and we evaluated the extent to which CFTC’s leasing processes were consistent with the Standards for Internal Control in the Federal Government. In addition, we reviewed and analyzed relevant CFTC Office of Inspector General (OIG) reports on space utilization among three of the four CFTC offices. We also obtained and analyzed CFTC data on lease payments, rentable square feet (rsf), and lease expansions, as well as CFTC’s staffing history. To assess the reliability of CFTC data, we determined which CFTC data were derived from computerized data systems, interviewed cognizant CFTC officials about these systems, and reviewed system documentation. We determined that these data were sufficiently reliable for the purposes of our report. To determine costs per rsf, we divided the lease costs for each CFTC office by its total rentable square footage for fiscal years 2008 through 2015. To determine the impact of CFTC’s excess space on its utilization, we converted the leased space from rsf to usable square feet (usf). We calculated the average rsf per staff member (including CFTC employees and on-site contractors) for fiscal years 2008 through 2010—the period before CFTC expanded existing or entered into new leases—and then determined the conversion factor needed to align this average with the 300 usf per staff member cited in CFTC’s 2009 Program of Requirements. Using this factor (21.17 percent), we calculated the average usf per staff member for fiscal years 2011 through 2015—the period after CFTC expanded existing or added new leases. (An illustrative sketch of this conversion appears at the end of this appendix.) To determine how CFTC lease costs compare to the average cost per rsf of other federal agencies leasing space in commercial buildings in the four markets where CFTC offices are located, we analyzed data from GSA’s lease inventory for fiscal years 2013 through 2015—the years for which these data are available. We combined the monthly GSA lease inventory reports into fiscal years and then sorted the leases by state and county to match the locations where CFTC maintains offices. Next, to better approximate CFTC leases, we sorted the data to include only those offices that were 100 percent office space and “fully serviced,” before dividing lease costs by rsf to determine the cost per square foot for each lease. We then sorted the leases by size to approximate the size range of CFTC leases and by location to include only the cities in which CFTC maintains offices. To illustrate how CFTC lease costs may compare to the private sector, we analyzed data from the Building Owners and Managers Association (BOMA).
Specifically, using BOMA’s Experience Exchange Report (EER) survey data for the four markets, we sorted the data to include privately owned buildings within the city limits where CFTC maintains offices and chose the BOMA average cost per rsf for the Office Rent Income category. We confirmed with BOMA officials that “Office Rent Income” from the building owners’ perspective was the equivalent of cost per rsf from the tenant perspective. BOMA’s EER survey data have not yet been compiled for fiscal year 2015. To assess the reliability of these data, we interviewed GSA and BOMA officials about how they collect and maintain the data, as well as the completeness of the data, and we determined that the data were sufficiently reliable for the purposes of our report. However, BOMA does not collect EER survey data in a way that allows for an assessment of survey coverage; that is, there is no information available to measure the percentage of buildings in any given market that are included in the data, nor is there any information available to measure the extent to which particular types of buildings may be under- or over-represented. Therefore, the measures of lease cost per square foot resulting from BOMA’s EER survey data are not generalizable to other buildings in those markets for which no BOMA survey data were reported. However, when reporting measures of cost per square foot from the BOMA EER survey data, we include the number of buildings with reportable data from which the measure was derived. We attempted to use Federal Real Property Profile (FRPP) data to determine per-square-foot lease costs for fiscal years 2008 through 2014, but based on our analysis of the data and meetings with GSA, we determined that the data were unsuitable for that purpose. To identify what potential options exist that CFTC could consider for improving the cost-effectiveness of future lease procurement, we reviewed and analyzed CFTC’s legal authority to lease properties. We also obtained and analyzed CFTC leases and conducted site visits at each of the four offices (the Washington, D.C., headquarters; Kansas City, MO; Chicago, IL; and New York, NY). We interviewed CFTC officials at CFTC headquarters and all of the regional offices about their business processes, staffing, and space procurement planning and management procedures. Additionally, we interviewed CFTC Office of Inspector General (OIG) officials about their findings and ongoing reviews on CFTC space utilization. Furthermore, we interviewed GSA officials to understand their perspectives on lease procurement, including procurements by agencies with independent leasing authority. Using our analysis of CFTC leases, space procurement planning documents, and policies and procedures; our interviews with agency officials; and our review of a current presidential memorandum, OMB real property management initiatives, and GSA leasing guidance, we identified several potential options CFTC may consider to improve the cost-effectiveness of its lease portfolio. We conducted this performance audit from June 2015 to April 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
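As noted above, we converted rentable square feet to usable square feet using a 21.17 percent conversion factor calibrated to the 300-usf-per-staff standard in CFTC’s 2009 Program of Requirements. The following minimal Python sketch illustrates one way to apply that conversion; the factor and the 300-usf standard come from this appendix, while the direction of the conversion (usf as rsf reduced by the factor) and the office figures are illustrative assumptions, not CFTC data.

# Illustrative only: rsf-to-usf conversion and usf per staff member.
# The 21.17 percent factor and the 300-usf-per-staff standard come from the
# methodology above; applying the factor as a straight reduction of rsf is an
# assumption, and the rsf and staffing inputs are hypothetical placeholders.
CONVERSION_FACTOR = 0.2117  # share of rentable area treated as non-usable
USF_STANDARD = 300          # usable square feet per staff member (2009
                            # Program of Requirements)

def rsf_to_usf(rsf):
    """Convert rentable square feet (rsf) to usable square feet (usf)."""
    return rsf * (1 - CONVERSION_FACTOR)

def usf_per_staff(total_rsf, staff):
    """Average usable square feet per staff member (employees + contractors)."""
    return rsf_to_usf(total_rsf) / staff

# Hypothetical office: 24,000 rsf leased and 31 staff on board.
print(f"{usf_per_staff(24_000, 31):.0f} usf per staff member "
      f"vs. the {USF_STANDARD}-usf planning standard")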
A. General Principles: The general principles governing Commodity Futures Trading Commission (“CFTC” or “Commission”) acquisition of office space include:

Maximizing competition to the extent practicable;
Avoiding conflicts of interest;
Adhering to the requirements of Procurement Integrity; and
Reasoned decision-making to obtain space that enables the Commission to accomplish its mission in an efficient and cost-effective manner.

Although neither the Federal Acquisition Regulation (FAR) nor the General Services Acquisition Manual (GSAM) specifically applies to the acquisition of office space by CFTC, the principles cited above are embodied in those documents. Accordingly, CFTC has chosen to comply with aspects of their requirements that facilitate these ends. This is discussed further below.

B. Applicability of Regulations and Policies: The FAR does not apply to the acquisition of leased office space. Specifically, the scope of the FAR’s coverage is defined in Section 1.104 as follows: “The FAR applies to all acquisitions as defined in Part 2 of the FAR, except where expressly excluded.” According to Part 2.101(b), the term “acquisition” is defined as the “acquiring by contract with appropriated funds of supplies or services (including construction) by and for the use of the Federal Government through purchase or lease, whether the supplies or services are already in existence or must be created, developed, demonstrated, and evaluated.” The term “supplies” is defined as “all property except land or interest in land.” Because a lease of real property, including a lease of office space, is an interest in land, it is not a “supply,” and the FAR does not apply. The GSAM is also inapplicable to the lease of office space by CFTC. The GSAM applies to the acquisition of leased office space by GSA and any agencies delegated independent leasing authority by GSA and so required by GSA to use the GSAM. CFTC’s independent leasing authority was mandated by its authorizing legislation and not by GSA. Accordingly, the GSAM is not required for use by CFTC in its acquisition of office space for lease. In the absence of explicit regulatory direction, CFTC has chosen to comply with aspects of these documents that facilitate the principles cited above. The GSAM is used specifically for its guidance as to the considerations and findings necessary to support a lease procurement by other than full and open competition. Additional guidance and processes that may be applicable to lease acquisition are contained in CFTC’s acquisition policy. The acquisition process for all lease awards begins with requirements definition and completion of market research. Market research is used to determine whether CFTC is best served by an open market competitive acquisition or a follow-on award to the incumbent lessor.
1. Steps in a competitive acquisition of leased office space are as follows:

Develop Program of Requirements
Conduct market survey
Define delineated area
Formulate an acquisition strategy
Advertise requirement
Review expressions of interest
Tour properties
Develop a solicitation list
Draft and issue a Solicitation for Offers
Draft a Technical Evaluation Plan
Designate a Technical Evaluation Committee (TEC)
Evaluate initial offers
Complete initial TEC report
Complete price analysis
Complete Phase II Determination (assumes negotiation; otherwise, the Contracting Officer will draft a Source Selection Statement at this time)
Conduct negotiations
Solicit and evaluate revised offers
Complete Final TEC Report
Contracting Officer completes Source Selection Statement
Memorialize terms of agreement between the parties in a lease document

2. Steps in award of a lease by other than full and open competition are as follows:

Develop Program of Requirements
Conduct market survey
Complete Justification for Other Than Full and Open Competition
Conduct negotiations
Memorialize terms of agreement between the parties in a lease document

The functional objective of the acquisition process described herein is to acquire office space in a building that efficiently supports CFTC’s mission; provides a high-quality work environment; and offers a satisfactory breadth and variety of amenities. This outcome must be met in a manner that maximizes value to the Commission, considering price and technical factors. It must be provided at a price that is fair and reasonable.

II. Construction of Space: CFTC’s office space is constructed in accordance with the terms of its office space lease agreements. Construction contracts and trade subcontracts, as appropriate, are awarded based on a competitive process that results in fair and reasonable pricing. CFTC’s Contracting Officer is privy to bid information and, in consultation with CFTC’s architect, project manager, and other knowledgeable Commission personnel, approves project pricing as well as any required contract change orders.

III. Administration of Leases: CFTC’s Contracting Officer is responsible for analyzing rent-related charges and authorizing payment as appropriate. The Contracting Officer is also responsible for addressing with the landlord any issues pertaining to lease compliance. The Office of Management Operations is responsible for day-to-day facility operational matters and consults with the Contracting Officer on lease-related issues as appropriate.

In addition to the contact named above, Amelia Bates Shachoy (Assistant Director), Lindsay Madison Bach, Dwayne Curry, Lawrance Evans, Terence Lam, Hannah Laufe, Sara Ann Moessbauer, Minette Richardson, Amelia Michelle Weathers, and Crystal Wesco made key contributions to this report.
The CFTC regulates certain financial markets, and the Dodd-Frank Wall Street Reform and Consumer Protection Act (Dodd-Frank Act) expanded its regulatory responsibilities. Prior to enactment of the Dodd-Frank Act, in anticipation of these increased responsibilities, the agency began planning for more space to accommodate additional staff in each of its four office locations. GAO was asked to review CFTC's staffing, leasing practices, and costs. This report examines (1) the extent to which CFTC made cost-effective decisions and used leading government guidance in planning for additional space in fiscal years 2008 through 2015 and (2) potential options to improve the cost-effectiveness of CFTC's future leasing. GAO (1) reviewed applicable federal laws, regulations, and guidance that apply to real property leasing, as well as CFTC's space-planning documents and leases for fiscal years 2008 through 2015; (2) analyzed data and conducted interviews with key officials from CFTC and GSA; and (3) visited all four CFTC offices. The Commodity Futures Trading Commission (CFTC) did not make cost-effective decisions consistent with leading government guidance for lease procurement and internal controls when planning for additional space in fiscal years 2008–2015. CFTC began planning for expansion in the fiscal year 2009 time frame—more than a year before the enactment of the Dodd-Frank Act in July 2010. CFTC renewed leases and expanded space in its Washington, D.C., headquarters and three regional offices in anticipation of receiving funding to hire additional staff but did not receive the amounts requested. As a result, CFTC has lease obligations for currently unused space, some of which extend through 2025. Overall, the total occupancy level for all four offices combined was about 78 percent as of the end of fiscal year 2015, and each office has a different occupancy level, as shown in the figure below. CFTC has independent authority to lease real property, including office space. The two documents CFTC uses to guide the lease procurement process provide some high-level guidance on this process, but the documents do not establish specific policies and procedures to help ensure cost-effective decisions. By comparison, leading government guidance from the General Services Administration (GSA) includes comprehensive details on lease procurement. The lack of this type of detail may have contributed to CFTC's making decisions that were not cost-effective. GAO identified several potential options that CFTC may pursue now and in the future to increase space utilization and improve the cost-effectiveness of its leasing arrangements: (1) relocating offices to less costly locations, (2) reducing office space requirements through enhanced telework, and (3) consolidating two regional offices—Kansas City and Chicago. CFTC officials told GAO that these options may not be feasible; however, the officials have not fully assessed these options or their potential for improving cost-effectiveness and do not have a timeline for doing so. To help ensure cost-effective leasing decisions, GAO recommends that CFTC (1) ensure that its revised leasing policies and procedures incorporate leading government guidance and (2) establish a timeline for evaluating and documenting options to potentially improve space utilization and reduce leasing costs. CFTC generally concurred with GAO's recommendations but noted that it would not be able to take actions to reduce lease costs in the near term.
GPRA calls for agencies to address human capital in the context of performance-based management and specifically requires that annual performance plans describe how agencies will use their human capital to support the accomplishment of their goals and objectives. In addition, OMB’s fiscal year 2001 guidance for agencies’ annual performance plans (OMB Circular No. A-11, Part 2) states that agencies’ annual plans may include agencywide goals for internal agency functions and operations, such as employee skills and training, workforce diversity, retention, downsizing, and streamlining. Building on the fiscal year 2001 guidance, the fiscal year 2002 guidance notes the increased emphasis on the use of workforce planning and other specific strategies that align human capital with the fulfillment of an agency’s mission. The fiscal year 2002 guidance specifies for the first time that agencies should include performance goals covering human capital management areas, such as recruitment, retention, skill development and training, and appraisals linked to program performance. We have noted that the more useful annual performance plans discuss, or refer to a separate plan that discusses, the human capital needs—in terms of knowledge, skills, and abilities—necessary for achieving goals and the workforce planning methods by which these needs were determined. Also, the more useful plans describe how strategies—in such areas as recruitment and hiring, retention and separation, training and career development, employee incentives, and accountability systems—meet workforce needs and support the achievement of goals. Addressing the federal government’s human capital challenges is a responsibility shared by many parties. We have noted that OPM and OMB have substantial roles to play in fostering a more results-oriented approach to strategic human capital management across the government. OPM has begun stressing to agencies the importance of integrating strategic human capital management with agency planning. OPM has also rolled out a workforce planning model, with associated research tools, and has launched a Web site to facilitate information-sharing about workforce planning issues. In addition, OPM has published A Handbook for Measuring Employee Performance: Aligning Employee Performance Plans with Organizational Goals. Recently, OPM revised the Senior Executive Service performance management regulations so that a balanced scorecard of customer satisfaction, employee perspectives, and organizational results is to be used by agencies to evaluate executives’ performance. OPM’s sustained commitment and attention will be critical to making a real difference in the way federal agencies manage human capital. It is likely that OPM will continue moving from “rules to tools” and that its most valuable contributions in the future will come less from traditional compliance and approval activities than from its initiatives for assisting agencies as a strategic partner. For example, we have noted that OPM could make a substantial contribution by continuing to review, streamline, and simplify OPM regulations and guidance to determine their continued relevance and utility. Related to this, we also have noted that OPM could make human capital flexibilities and best practices more widely known to agencies by communicating “how to” success stories and taking full advantage of its ability to facilitate information-sharing and outreach to human capital managers throughout the federal government.
While OMB has played a limited role in strategic human capital management, recent actions show OMB’s growing interest and potential importance in working with agencies to ensure that they have the human capital capabilities needed to achieve their strategic goals and missions. First, the President’s fiscal year 2001 budget gave new prominence to human capital management by making the alignment of federal human capital to support agency goals a Priority Management Objective. Another positive step is the increased attention to strategic human capital issues in OMB’s Circular No. A-11, Part 2, guidance to agencies on preparing the fiscal year 2002 performance plans. Most recently, in another important and positive step, the President’s fiscal year 2002 budget notes that the current civil service system does not do all it should to reward achievement or encourage excellence and limits the ability of agencies to compete successfully for highly skilled senior talent. The budget states that the administration will seek legislation to provide program managers new and expanded workforce restructuring tools. According to the budget, these actions, combined with improved accountability through better linkage of program performance with budget decisionmaking and other reforms, will make the federal government more responsive and effective. OMB is well positioned to assume greater leadership over governmentwide strategic human capital issues. First, given its central role in the budget process and responsibility for overall leadership over executive branch management improvement, OMB has the ability to leverage the cabinet secretaries and deputy secretaries to help ensure that their agencies view strategic human capital management as critically important in their overall strategic planning, performance management, and budgeting efforts. Second, OMB has the ability through resource allocations to help ensure that agencies give greater attention to the linkages between agency missions and the human capital needed to pursue them. To meet our objective, we reviewed the fiscal year 2001 performance plans that the 24 CFO agencies submitted to Congress. In addition, we reviewed our individual reports on agencies’ fiscal year 2001 performance plans; GPRA requirements for agencies’ performance plans; guidelines contained in OMB’s Circular No. A-11, Part 2; and our guidance, reports, and testimonies discussing strategic human capital management. We selected examples based on our guides to assist agencies and Congress with effectively implementing GPRA, specifically our guides to improving the usefulness of agency performance plans. We did our work from September 2000 to March 2001 in Washington, D.C., in accordance with generally accepted government auditing standards. We requested comments on a draft of this report from the Director of OMB and the Acting Director of OPM, and we asked officials in each of the agencies profiled to verify the accuracy of the information presented from their respective fiscal year 2001 performance plans. We incorporated their comments where applicable. We did not independently verify the accuracy of the information contained in the agencies’ fiscal year 2001 performance plans. Agencies’ fiscal year 2001 performance plans covered a variety of human capital activities, as illustrated by the following examples.

1. Recruit and Retain Employees on the Basis of Current and Projected Needs.
A high-performing organization identifies the best strategies for filling its talent needs through recruiting and hiring and follows up with appropriate investments to develop and retain the best possible workforce. Agencies’ performance plans reflected different levels of attention to employee recruitment and retention. As part of a discussion of challenges confronting the agency, the U.S. Mint’s fiscal year 2001 plan notes that a key issue facing the agency is its ability to attract and retain employees with the skills needed to perform the agency’s unique mission. The Mint explains that it is difficult for the agency to successfully compete for and retain employees with skills more commonly found in the private manufacturing and marketing sectors as well as in information technology. The Mint notes that it is exploring ways to increase the attractiveness of employment, such as rewarding and evaluating employees in accordance with its strategic plan goals and objectives, although the plan does not provide any performance goals or measures directly associated with the Mint’s human capital challenges or initiatives. Going beyond describing challenges facing the agency, the Department of Justice set a goal to “strengthen human resource recruitment and retention efforts, providing for a workforce that is well-trained and diverse.” Specifically, Justice states that on the basis of an assessment of recruitment and retention issues, the Immigration and Naturalization Service (INS) needs to recruit and retain qualified Border Patrol agents. Consequently, INS set a fiscal year 2001 target to increase its deployment of Border Patrol agents to 9,807 from the level of 7,982 achieved in fiscal year 1998. (See fig. 3.) As part of its human capital strategy to increase the number of these agents on board, the plan states that INS will train over 200 Border Patrol agents as recruiters, establish a toll-free job information hotline, and consider recruitment bonuses. Consistent with GPRA requirements for annual performance plans, INS also describes its procedures to validate and verify its data on the number of agents on board. Our work has found that the Department of Defense (DOD) faces an especially significant challenge in retaining the hundreds of thousands of new recruits it enlists each year. DOD set a performance goal to “recruit, retain, and develop personnel to maintain a highly skilled and motivated force capable of meeting tomorrow’s challenges” and states it will use four measures—enlisted recruiting, recruit quality benchmarks, active retention rates, and reserve attrition rates—to demonstrate its progress in meeting this goal. Specifically, for its enlisted recruiting measure, DOD set a fiscal year 2001 target of 205,248 new active force personnel and notes that this target allows for discharges, promotions, and anticipated retirements to maintain statutorily defined military end-strengths. DOD reports that in fiscal year 1999 it recruited 186,600 personnel, falling short of its target of 194,500. According to its plan, DOD intends to improve the results of its recruiting efforts by expanding advertising, increasing the number of recruiters, and providing enhanced enlistment bonuses.

2. Hire a Diverse Workforce. A high-performing organization maintains an environment characterized by inclusiveness that reflects a variety of styles and personal backgrounds and is responsive to the needs of diverse groups of employees.
Agencies present different approaches in their performance plans to address diversity. For example, the National Science Foundation focuses on the total number of hires to science and engineering positions from underrepresented groups. The Department of the Interior plans to increase the general diversity of its workforce rather than the growth rate of specific groups and set a goal to increase the diverse representation of its total workforce by at least 3.1 percent for fiscal year 2001 from an unspecified fiscal year 1997 level. Interior states that more detailed supporting documents are being developed. The Department of Housing and Urban Development (HUD), on the other hand, set a goal to continue to improve its workforce diversity by increasing the percentage of specific underrepresented groups, including Hispanics, women, and women and minority managers, by 0.3 percentage points. For example, HUD intends to focus on increasing the share of Hispanics to 7.4 percent of employees in fiscal year 2001, based on estimated achievement of 7.1 percent representation in fiscal year 2000. Hiring a diverse workforce can be one aspect of ensuring that HUD has the appropriate mix of staff with the proper skills to carry out its missions. HUD’s human capital has been an area of focus under our high-risk program since 1994.

3. Identify Skills and Training Needs and Provide Development Opportunities. A high-performing organization makes appropriate investments in education, training, and other developmental opportunities to help its employees build the competencies needed to achieve the organization’s mission. Some agencies describe in their fiscal year 2001 performance plans their efforts to identify the skills and training needs of employees. For example, the Federal Technology Service (FTS), a major component of the General Services Administration (GSA), describes its strategy to identify core competencies for each profession, create individual development plans, and provide employees with state-of-the-art technology and tools to help improve overall performance. Although it did not provide details, FTS also plans to increase its investment in employee training by providing employees with an individual training budget of 1 percent of salaries—in addition to the normal 4 percent training allocation—which is intended to allow employees to have a direct role in their own development. To measure progress toward its performance goal to provide increased opportunities for employee development and respond to employee needs, FTS used a survey that GSA administered to over 13,000 employees on FTS’ quality culture and organizational climate. The survey used a 7-point response scale and included 87 questions covering 15 categories, such as learning and development, communication, and teamwork. To measure its performance, FTS uses a composite indicator that combines the responses across all the questions. However, the GSA plan does not discuss the validity of the composite indicator as a measure of performance or its usefulness to GSA in pinpointing improvement opportunities. A few fiscal year 2001 performance plans describe agencies’ strategies to focus attention and resources on employee development. For example, the Environmental Protection Agency (EPA) plan states that the agency faces a future of formidable programmatic challenges, accelerating change, and competition in recruiting people with the skills needed to effectively carry out its mission.
To address these concerns, EPA recognizes that it needs to make a continual investment in developing its workforce. EPA reported that it conducted a workforce assessment that identifies critical skills needed through 2020. Although the plan does not provide details on the needed skills, it describes several training programs based on this workforce assessment. These programs include (1) New Skills/New Options, which will equip support staff with needed skills to assume vital roles in EPA; (2) the Mid-level Development Program, which will provide crosscutting skills and competencies to mid-level employees to enable them to be successful in a more dynamic and interdependent workplace; (3) the Leadership Development Program, which will develop supervisors, managers, and executives who will foster continued professional development; and (4) the EPA Intern Program, which will recruit a cadre of diverse employees. We recently reported that one of EPA’s performance and accountability challenges is to place greater emphasis on developing a comprehensive human capital approach. The Department of Education surveyed its managers on their staffs’ knowledge and skills for carrying out Education’s mission. Education then set a goal for fiscal year 2001 that 70 percent of its managers, up from a fiscal year 1998 baseline of 58 percent, agree that staff possess adequate knowledge and skills. (See fig. 4.) Education’s strategy for achieving its goal is to introduce a broad range of new training and development programs in a variety of formats, such as customized training for teams, and to continue to provide development opportunities for its employees, such as university course offerings, programs sponsored by the U.S. Department of Agriculture (USDA) Graduate School, and the on-line Learning Network. Consistent with the practices that we have reported can make performance plans more useful, Education describes its data source to measure this goal, validation procedures, and data limitations.

4. Link Executive Performance to Organizational Goals. A high-performing organization aligns performance expectations for its leaders with organizational goals to enhance accountability for performance. We recently reported on the emerging benefits from selected agencies’ use of performance agreements with their senior political and career executives as one approach to defining accountability for specific goals, monitoring progress during the year, and then contributing to performance evaluations. Two agencies discussed in that report—the Department of Transportation (DOT) and the Department of Veterans Affairs’ (VA) Veterans Health Administration (VHA)—discuss in their fiscal year 2001 performance plans their continued use of performance agreements. For example, DOT states that it is cascading the performance agreement goals between the Secretary and administrators or departmental officers to all Senior Executive Service members within the framework of its performance evaluation system. VHA’s individual performance agreements are negotiated between the Under Secretary for Health and all senior executives in VHA. These agreements contain quantifiable performance targets and other organizational priorities that executives are held accountable for achieving. In addition to a discussion on performance agreements, VA cites in its plan other activities under way to enhance accountability for performance.
For example, the Veterans Benefits Administration (VBA) integrated a balanced scorecard of performance measures into its executive appraisal system covering such areas as service delivery, customer service, and employee satisfaction. VBA has established an automated Balanced Scorecard that is available to all employees via VA’s Intranet and reports results at both the operational and strategic levels. VA also notes in its fiscal year 2001 performance plan that the National Cemetery Administration developed and is using consistent performance standards for all cemetery directors that are linked to its strategic goals. These performance standards address the areas of customer service and stewardship of VA’s national cemeteries, employee development, and cemetery operations.

5. Attend to Work Environment. A high-performing organization provides employees appropriate technology to perform their work and a safe environment to help elicit their best performance. As indicated in their performance plans, some agencies are trying to create work environments that support employee performance. For example, in USDA’s fiscal year 2001 performance plan, the National Agricultural Statistics Service (NASS) links the work environment—and particularly the availability of information technology—to improved performance. Specifically, NASS set a target in fiscal year 2001 that 90 percent of employees “strongly agree” or “agree” that the work environment is not an impediment to doing their jobs well. The plan reports that in both fiscal years 1998 and 1999, 80 percent of employees “strongly agreed” or “agreed” that the work environment was not an impediment; the target for fiscal year 2000 was 85 percent. To accomplish this goal, NASS plans to implement new systems, such as a local area network, video conferencing, and document archiving and retrieval systems. According to the plan, NASS will use periodic organizational climate surveys to track employees’ ratings of their work environment. The Social Security Administration (SSA) set a strategic objective “to provide a physical environment that promotes the health and well being of employees.” As one of its performance measures, the agency will use a survey to measure the percentage of employees reporting that they are satisfied with the level of security in their facilities. SSA set a goal for fiscal year 2001 that 75 percent of employees be satisfied with security in their facilities, compared to 64 percent in fiscal year 1998. (See fig. 5.) To achieve this objective, SSA states in its plan that it will continue to enhance ongoing programs for assessing and addressing security requirements and for identifying and resolving health and safety problems in the workplace.

6. Establish an Employee-Friendly Workplace. A high-performing organization provides work-life programs and services that improve an employee’s ability to balance work and family obligations and enhance job satisfaction. Some agencies discuss in their performance plans how they intend to improve the workplace through expanded work-life services. For example, the Department of Labor states that it will continue to offer referral services for employees in the work-life areas of childcare, elder care, and adoptive services. Also, the plan states that services will include a toll-free “1-800” telephone counseling service and the use of the Internet as an additional means of accessing referral resources.
In addition, Labor will review internal practices and procedures to improve worker accommodations, among other things. To improve job satisfaction and the quality of work life, the Indian Health Service (IHS) of the Department of Health and Human Services set a performance measure to improve its overall Human Resource Management Index (HRMI) score to at least 95 points in fiscal year 2001 from a baseline of 93 points in fiscal years 1998 and 1999. The HRMI employee survey measures 14 different work-related issues, such as management culture and employee morale, and is used to determine if the agency’s human resource program is meeting employee and management expectations. According to its plan, IHS expects to raise the HRMI score by at least one point each year. The IHS plan notes that the HRMI has been in use at the Department since 1991 and is designed to be a valid measure of management practices that are important to organizational performance. The IHS plan also notes that the agency is taking several actions to improve its HRMI score, but IHS does not detail how the actions will help it achieve the one-point improvement goal and, ultimately, improve organizational performance.

7. Choose an Appropriate Organizational Structure. A high-performing organization recognizes the importance of choosing a structure that supports the organization’s mission and takes into account its present and future needs. The transition to a knowledge-based government will increasingly prompt federal agencies to adopt flat, flexible, and team-oriented organizational structures. For example, the Department of Health and Human Services’ Administration for Children and Families (ACF) states in its fiscal year 2001 performance plan that it intends to reduce bureaucratic levels and rely more on teams in an effort to support its strategic goal to build a results-oriented organization. Specifically, ACF set a goal to increase its manager-to-staff ratio from 1:4.6 in fiscal year 1993 to 1:9 in fiscal year 2001. ACF reports that it did not attain its fiscal year 1999 target and lowered its fiscal year 2000 target because of staff separations and severe outside hiring limitations. ACF provides trend data from fiscal year 1995 and cites personnel data as the data source.

8. Streamline, Simplify, and Expedite Personnel Operations. A high-performing organization tailors its personnel operations to quickly bring needed talent on board and to make progress in the “war for talent” in a competitive, knowledge-based economy. Some agency performance plans describe efforts to make human capital processes more efficient. According to OPM’s fiscal year 2001 performance plan, its Office of Human Resources and Equal Employment Opportunity (OHREEO) works as a business partner with OPM managers to assist them in maximizing the use of the agency’s human resources toward accomplishing OPM’s goal to recruit, develop, and maintain a highly skilled and diverse workforce. In support of this goal, OHREEO plans to streamline and automate staffing processes and, specifically, to reduce all recruitment and hiring cycle times to an average of 48 days in fiscal year 2001 from a baseline average of 58 days in fiscal year 1999. The plan notes that the data on cycle time are retrieved from various documents that track the dates when (1) OHREEO receives the Request for Personnel Action, (2) the vacancy announcement opens, and (3) the selection list is sent to the selecting official.
The Department of Commerce describes "Commerce Opportunities On-Line" (COOL) as an automated, Web-based vacancy announcement, application, and referral system. The system is intended to broaden Commerce's distribution of vacancy announcements to anyone who has access to the Web, reduce the time needed to disseminate announcements to applicants, and provide applicants an on-line avenue for submitting their applications. COOL also is to benefit Commerce managers by permitting on-line issuance of referral lists of eligible candidates to the selecting official, facilitating e-mail communication between the selecting official and applicants, and allowing the selecting official to automatically notify the Human Resource Office when he or she has completed the selection process.

Designing, implementing, and maintaining a strategic human capital management focus are critical to maximizing the performance of the federal government and ensuring its accountability for the benefit of the American people. We found that the ways in which agencies discussed human capital challenges in their fiscal year 2001 performance plans reflected different levels of attention to the critical human capital challenges agencies face. When viewed collectively, the plans show a need to increase the breadth, depth, and specificity of many related human capital goals and strategies. The plans' discussions of human capital should continue to show progress in moving from form to substance, that is, from simply describing human capital challenges to detailing the what, why, how, and when of the strategies to address those challenges. The discussions should also demonstrate a better link between human capital management and the agencies' strategic and programmatic planning to maximize performance and ensure the best resource allocation. Overall, with the increasing attention to human capital, the fiscal year 2001 plans showed that substantial opportunities exist for improvement, and we expect that agencies will continue to refine their goals and strategies as they focus on a more systematic, in-depth, and continuous effort to evaluate and improve their human capital management. Agencies will need to follow up through effective implementation and assessment to determine whether their plans lead to improvements in human capital management and programmatic outcomes.

We provided a draft of this report to the Director of OMB and the Acting Director of OPM on January 23, 2001, for their review and comment. We subsequently shared with the Director of OMB and the Acting Director of OPM portions of the draft that we updated to reflect our testimonies on human capital and the President's fiscal year 2002 budget. The Director of OMB did not provide written comments; however, OMB staff provided technical suggestions, which we have incorporated where appropriate. OPM's Acting Director provided written comments in his February 9, 2001, letter, which is included in appendix I. In that letter, he stated that this report would make a useful contribution to the ongoing agency implementation of GPRA and to the increased attention to the federal government's strategic human capital management. He also provided several suggestions that OPM believes would make the report even more useful. OPM suggested, and we have included in the report, a reference to its 1999 publication A Handbook for Measuring Employee Performance: Aligning Employee Performance Plans with Organizational Goals as another tool to help agencies in their efforts.
OPM stated that the draft report "leaves an impression that agencies have not taken any previous actions to include human resources management actions in their strategic and annual planning" and did not acknowledge other examples of human capital activities under way. As the draft report noted, the scope of this review focused on human capital activities discussed in agencies' performance plans for fiscal year 2001. Therefore, we did not consider prior performance or strategic plans, or other human capital activities, in the analysis for this report. OPM suggested that we be clear that the report does not include all instances of human capital planning that we found in the fiscal year 2001 performance plans. We did not intend to imply that the report describes each reference to human capital issues in agencies' annual performance plans. Rather, as stated in the draft, we selected examples based on our guides to assist agencies and Congress with effectively implementing GPRA, specifically our guides on improving the usefulness of agency performance plans. Finally, OPM suggested that it would be helpful to emphasize that managing the effective tactical use of several existing flexibilities is its own challenge, particularly in view of limited resources, and that agencies would do well to develop coordinated approaches to their use. This point is beyond the objective and scope of our review and, therefore, is not included in our discussion.

We are sending copies of this report to Senator Joseph Lieberman, Ranking Member of the Senate Committee on Governmental Affairs; Senator Thad Cochran, Chairman of the Senate Governmental Affairs Subcommittee on International Security, Proliferation, and Federal Services; Representative Henry Waxman, Ranking Minority Member of the House Committee on Government Reform; the Honorable Mitchell E. Daniels, Jr., Director of OMB; Steven R. Cohen, Acting Director of OPM; and other interested parties. We will also make this report available to others upon request. If you have any questions about this report, please contact me or Lisa Shames at (202) 512-6806. Key contributors to this report were Dottie Self and Janice Lichty.
The Government Performance and Results Act calls for agencies to address human capital in the context of performance-based management. The act requires that annual performance plans describe how agencies will use their human capital to accomplish their goals and objectives. Designing, implementing, and maintaining a strategic human capital management focus are critical to maximizing performance and ensuring that government is accountable to the American people. GAO found that the human capital challenges described in fiscal year 2001 performance plans reflected the different levels of attention agencies are paying to this critical issue. GAO contends that the breadth, depth, and specificity of many related human capital goals and strategies need to be increased. The plans' discussions of human capital need to move beyond simply describing human capital challenges and instead specify the what, why, how, and when of the strategies to address those challenges. The discussions should also better link human capital management and the agencies' strategic and program planning to maximize performance and ensure optimal resource allocation. Overall, the fiscal year 2001 plans showed that substantial opportunities exist for improvement, and agencies will need to continue refining their goals and strategies as they focus on a more systematic, in-depth, and continuous effort to evaluate and improve their human capital management. Agencies will need to follow up to determine whether their plans actually improve human capital management and program outcomes.
According to FPS officials, since 2010 the agency has required its guards to receive training on how to respond to an active-shooter scenario. However, as our 2013 report shows, FPS faces challenges providing active-shooter response training to all of its guards. According to FPS officials, the agency provides guards with information on how they should respond during an active-shooter incident as part of the 8-hour FPS-provided orientation training. FPS officials were not able to specify how much time is devoted to this training, but said that it is a small portion of the 2-hour special situations training. According to FPS's training documents, this training includes instructions on how to notify law enforcement personnel, secure the guard's area of responsibility, use force appropriately, and direct building occupants according to emergency plans. However, when we asked officials from 16 of the 31 contract guard companies we spoke to whether their guards had received training on how to respond during active-shooter incidents, responses varied. For example, of the 16 contract guard companies we interviewed about this topic:

• officials from eight companies stated that their guards had received active-shooter scenario training during FPS orientation;
• officials from five companies stated that FPS had not provided active-shooter scenario training to their guards during the FPS-provided orientation training; and
• officials from three companies stated that FPS had not provided active-shooter scenario training to their guards during the FPS-provided orientation training, but that the topic was covered at some other time.

We were unable to determine the extent to which FPS's guards have received active-shooter response training. Without ensuring that all guards receive training on how to respond to active-shooter incidents, FPS has limited assurance that its guards are prepared for this threat. FPS agreed with our recommendation that it take immediate steps to determine which guards have not received this training and provide it to them.

As part of their 120 hours of training, guards must receive 8 hours of screener training from FPS on how to use x-ray and magnetometer equipment. However, in our September 2013 report, we found that FPS has not provided required screener training to all guards. Screener training is important because many guards control access points at federal facilities and thus must be able to properly operate x-ray and magnetometer machines and understand their results. In 2009 and 2010, we reported that FPS had not provided screener training to 1,500 contract guards in one FPS region. In response to our reports, FPS stated that it planned to implement a program to train its inspectors to provide screener training to all of its contract guards. However, 3 years after our 2010 report, guards who have never received this training continue to be deployed to federal facilities. For example, an official at one contract guard company stated that 133 of its approximately 350 guards (about 38 percent) on three separate FPS contracts (awarded in 2009) have never received their initial x-ray and magnetometer training from FPS. The official stated that some of these guards are working at screening posts. Further, officials at another contract guard company in a different FPS region stated that, according to their records, 78 of 295 guards (about 26 percent) deployed under their contract have never received FPS's x-ray and magnetometer training.
These officials stated that FPS's regional officials were informed of the problem but allowed guards to continue to work under this contract despite not having completed required training. Because FPS is responsible for this training, according to guard company officials, no action was taken against the company. Consequently, some guards deployed to federal facilities may be using x-ray and magnetometer equipment that they are not qualified to use, raising questions about the ability of some guards to execute a primary responsibility: properly screening access control points at federal facilities. We were unable to determine the extent to which FPS's guards have received screener training. FPS agreed with our recommendation that it take immediate steps to determine which guards have not received screener training and provide it to them.

In our September 2013 report, we found that FPS continues to lack effective management controls to ensure that guards have met training and certification requirements. For example, although FPS agreed with our 2010 and 2012 recommendations to develop a comprehensive and reliable system for contract guard oversight, it still does not have such a system. Without a comprehensive guard management system, FPS has no independent means of ensuring that its contract guard companies have met contract requirements, such as providing qualified guards to federal facilities. Instead, FPS requires its guard companies to maintain files containing guard training and certification information and to provide it with a monthly report containing this information. In our September 2013 report, we found that 23 percent of the 276 guard files we reviewed (maintained by 11 of the 31 guard companies we interviewed) lacked required training and certification documentation. For example, some guard files lacked documentation of basic training, semi-annual firearms qualifications, screener training, the 40-hour refresher training (required every 3 years), and CPR certification.

Risk assessments help decision-makers identify and evaluate security risks and implement protective measures to mitigate the potential undesirable effects of these risks. The Interagency Security Committee's (ISC) risk assessment standards state that agencies' facility risk assessment methodologies must:

• consider all of the undesirable events identified by ISC as possible risks to federal facilities, and
• assess the threat, vulnerability, and consequence of specific undesirable events.

Preliminary results from our ongoing review of nine federal agencies' risk assessment methodologies indicate that several agencies, including FPS, do not use a methodology that aligns with ISC's risk assessment standards to assess federal facilities. Most commonly, agencies' methodologies are not consistent with ISC's standards because agencies do not assess their facilities' vulnerabilities to specific undesirable events. For example, officials from one agency told us that their vulnerability assessments are based on the total number of protective measures in place at a facility, rather than on how vulnerable the facility is to specific undesirable events, such as insider attacks or vehicle bombs. Because agencies' risk assessment methodologies are inconsistent with ISC's risk assessment standards, these agencies may not have a complete understanding of the risks facing approximately 57,000 federal facilities located around the country, including the 9,600 protected by FPS and several agencies' headquarters facilities.
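To make ISC's three-component requirement concrete, the sketch below scores a facility against a list of undesirable events. ISC's standards require assessing threat, vulnerability, and consequence but do not prescribe a single scoring formula, so the multiplicative combination, the numeric scales, and the example events here are illustrative assumptions only.

```python
# Illustrative only: ISC requires assessing threat, vulnerability, and
# consequence per undesirable event; the T x V x C combination, scales,
# and event values below are assumptions, not ISC's prescribed method.
from dataclasses import dataclass

@dataclass
class UndesirableEvent:
    name: str
    threat: float         # assessed likelihood the event is attempted (0-1)
    vulnerability: float  # assessed likelihood an attempt succeeds (0-1)
    consequence: float    # assessed severity of loss if it succeeds (0-100)

    def risk(self) -> float:
        return self.threat * self.vulnerability * self.consequence

events = [
    UndesirableEvent("vehicle bomb", threat=0.05, vulnerability=0.6, consequence=95),
    UndesirableEvent("insider attack", threat=0.10, vulnerability=0.4, consequence=70),
]

# Rank events so protective measures can target the largest risks first.
for event in sorted(events, key=UndesirableEvent.risk, reverse=True):
    print(f"{event.name}: risk score = {event.risk():.1f}")
```

Note that a simple count of protective measures in place, which the one agency cited above used, never enters this calculation; that is why such an approach cannot show how vulnerable a facility is to any specific event.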
Moreover, because risk assessments play a critical role in helping agencies tailor protective measures to reflect their facilities' unique circumstances and risks, these agencies may not allocate security resources effectively; that is, they may provide too much or too little protection at their facilities. Providing more protection at a facility than is needed may result in an unnecessary expenditure of government resources, while providing too little protection may leave a facility and its occupants vulnerable to attacks. For example, if an agency does not know its facility's potential vulnerabilities to specific undesirable events, it cannot set priorities to mitigate them.

In addition, we reported in 2012 that although federal agencies pay FPS millions of dollars to assess risk at their facilities, FPS's interim facility assessment tool, the Modified Infrastructure Survey Tool (MIST), was not consistent with federal risk assessment standards and had other limitations. Specifically, FPS's risk assessment methodology was inconsistent with ISC's risk assessment standards because it did not assess the consequence of possible undesirable events (i.e., the level, duration, and nature of loss resulting from undesirable events). FPS officials told us that MIST was not designed to assess consequence and that adding this component would have required additional testing and validation. However, without a risk assessment tool that includes all three components of risk (threat, vulnerability, and consequence), as we have recommended, FPS has limited assurance that facility decision-makers can efficiently and effectively prioritize programs and allocate resources to address existing and potential security risks. Furthermore, because MIST also was not designed to compare risks across facilities, FPS has limited assurance that it prioritizes and mitigates critical risks within the agency's portfolio of more than 9,600 federal facilities.

This concludes our testimony. We are pleased to answer any questions you, Ranking Member Barber, and members of the Subcommittee might have. For further information on this testimony, please contact Mark Goldstein at (202) 512-2834 or by email at GoldsteinM@gao.gov. Individuals making key contributions to this testimony include Tammy Conquest, Assistant Director; Antoine Clark; Colin Fallon; Geoff Hamilton; Katherine Hamer; Sara Ann Moessbauer; Jaclyn Nidoh; and Travis Thomson.

Federal Protective Service: Challenges with Oversight of Contract Guard Program Still Exist, and Additional Management Controls Are Needed. GAO-13-694. Washington, D.C.: September 2013.
Facility Security: Greater Outreach by DHS on Standards and Management Practices Could Benefit Federal Agencies. GAO-13-222. Washington, D.C.: January 2013.
Federal Protective Service: Actions Needed to Assess Risk and Better Manage Contract Guards at Federal Facilities. GAO-12-739. Washington, D.C.: August 2012.
Homeland Security: Protecting Federal Facilities Remains a Challenge for the Department of Homeland Security's Federal Protective Service. GAO-11-813T. Washington, D.C.: July 13, 2011.
Federal Facility Security: Staffing Approaches Used by Selected Agencies. GAO-11-601. Washington, D.C.: June 2011.
Homeland Security: Preliminary Observations on the Federal Protective Service's Workforce Analysis and Planning Efforts. GAO-10-802R. Washington, D.C.: June 14, 2010.
Homeland Security: Federal Protective Service's Use of Contract Guards Requires Reassessment and More Oversight. GAO-10-614T. Washington, D.C.: April 14, 2010.
Homeland Security: Federal Protective Service's Contract Guard Program Requires More Oversight and Reassessment of Use of Contract Guards. GAO-10-341. Washington, D.C.: April 13, 2010.
Homeland Security: Ongoing Challenges Impact the Federal Protective Service's Ability to Protect Federal Facilities. GAO-10-506T. Washington, D.C.: March 16, 2010.
Homeland Security: Greater Attention to Key Practices Would Improve the Federal Protective Service's Approach to Facility Protection. GAO-10-142. Washington, D.C.: October 23, 2009.
Homeland Security: Federal Protective Service Has Taken Some Initial Steps to Address Its Challenges, but Vulnerabilities Still Exist. GAO-09-1047T. Washington, D.C.: September 23, 2009.
Homeland Security: Federal Protective Service Should Improve Human Capital Planning and Better Communicate with Tenants. GAO-09-749. Washington, D.C.: July 30, 2009.
Homeland Security: Preliminary Results Show Federal Protective Service's Ability to Protect Federal Facilities Is Hampered by Weaknesses in Its Contract Security Guard Program. GAO-09-859T. Washington, D.C.: July 8, 2009.
Homeland Security: The Federal Protective Service Faces Several Challenges That Raise Concerns About Protection of Federal Facilities. GAO-08-897T. Washington, D.C.: June 19, 2008.
Homeland Security: The Federal Protective Service Faces Several Challenges That Raise Concerns About Protection of Federal Facilities. GAO-08-914T. Washington, D.C.: June 18, 2008.
Homeland Security: The Federal Protective Service Faces Several Challenges That Hamper Its Ability to Protect Federal Facilities. GAO-08-683. Washington, D.C.: June 11, 2008.
Homeland Security: Preliminary Observations on the Federal Protective Service's Efforts to Protect Federal Property. GAO-08-476T. Washington, D.C.: February 8, 2008.
As part of the Department of Homeland Security (DHS), FPS is responsible for protecting federal employees and visitors in approximately 9,600 federal facilities under the control and custody of the General Services Administration (GSA). Recent incidents at federal facilities demonstrate their continued vulnerability to attacks or other acts of violence. To help accomplish its mission, FPS conducts facility risk assessments and provides oversight of approximately 13,500 contract security guards deployed to federal facilities. This testimony is based on the results of our September 2013 report (released by the Subcommittee today), previous reports, and preliminary results of work GAO conducted for a report that GAO plans to issue to the Chairman later this year. GAO discusses (1) challenges FPS faces in ensuring that contract security guards deployed to federal facilities are properly trained and certified and (2) the extent to which FPS's and selected federal agencies' facility risk assessment methodologies align with standards issued by the Interagency Security Committee (ISC). To perform this work, GAO reviewed FPS and guard company documentation and interviewed officials about oversight of guards. GAO also reviewed FPS's and eight federal agencies' risk assessment documentation and compared it to ISC's standards. These agencies were selected based on their missions and types of facilities.

The Federal Protective Service (FPS) faces challenges ensuring that contract guards have been properly trained and certified before being deployed to federal facilities around the country. In a September 2013 report, GAO found that providing active-shooter response and screener training is a challenge for FPS. For example, according to officials at five guard companies, their contract guards have not received training on how to respond during incidents involving an active shooter. Without ensuring that all guards receive this training, FPS has limited assurance that its guards are prepared for such a threat. Similarly, officials from one of FPS's contract guard companies stated that 133 (about 38 percent) of its approximately 350 guards have never received screener training. As a result, those guards may be using x-ray and magnetometer equipment at federal facilities that they are not qualified to use, raising questions about their ability to properly screen access control points at federal facilities, one of their primary responsibilities. We were unable to determine the extent to which FPS's guards have received active-shooter response and screener training. FPS agreed with GAO's 2013 recommendation that it take steps to identify guards who have not had required training and provide it to them. GAO also found that FPS continues to lack effective management controls to ensure its guards have met its training and certification requirements. For instance, although FPS agreed with GAO's 2010 and 2012 recommendations that it develop a comprehensive and reliable system for managing information on guards' training, certifications, and qualifications, it still does not have such a system. Additionally, 23 percent of the 276 guard files GAO examined (maintained by 11 of the 31 guard companies GAO interviewed) lacked required training and certification documentation. Examples of missing items include documentation of initial weapons and screener training and firearms qualifications.
GAO's preliminary results indicate that several agencies, including FPS, do not use a methodology to assess risk at their facilities that aligns with ISC's risk assessment standards. Risk assessments help decision-makers identify and evaluate security risks and implement protective measures to mitigate the risk. ISC's standards state that agencies' facility risk assessment methodologies must (1) consider all of the undesirable events identified by ISC as possible risks to federal facilities and (2) assess the threat, vulnerability, and consequence of specific undesirable events. Most commonly, the agencies' methodologies that GAO reviewed are inconsistent with ISC's standards because they do not assess facilities' vulnerabilities to specific undesirable events. If an agency does not know its facilities' potential vulnerabilities to specific undesirable events, it cannot set priorities to mitigate these vulnerabilities. In addition, as GAO reported in August 2012, although federal agencies pay FPS millions of dollars to assess risk at their facilities, FPS's risk assessment tool is not consistent with ISC's risk assessment standards because it does not assess consequence (i.e., the level, duration, and nature of loss resulting from undesirable events). As a result, FPS and the other non-compliant agencies GAO reviewed may not have a complete understanding of the risks facing approximately 57,000 federal facilities located around the country (including the 9,600 protected by FPS). DHS and FPS agreed with GAO's recommendations in its September 2013 report.
Information security is a critical consideration for any organization reliant on IT and especially important for government agencies, where maintaining the public's trust is essential. The Federal Information Security Management Act of 2002 (FISMA) established a framework designed to ensure the effectiveness of security controls over information resources that support federal operations and assets. According to FISMA, each agency is responsible, among other things, for providing information security protections commensurate with the risk and magnitude of the harm resulting from unauthorized access, use, disclosure, disruption, modification, or destruction of information collected or maintained by or on behalf of the agency and information systems used or operated by an agency or by a contractor of an agency or other organization on behalf of an agency.

Consistent with its statutory responsibilities under FISMA, in February 2010, NIST issued Special Publication (SP) 800-37 on implementing effective risk management processes to (1) build information security capabilities into information systems through the application of management, operational, and technical security controls; (2) maintain awareness of the security state of information systems on an ongoing basis through enhanced monitoring processes; and (3) provide essential information to senior leaders to facilitate system authorization decisions regarding the acceptance of risk to organizational operations and assets, individuals, other organizations, and the nation arising from the operation and use of information systems. According to NIST guidance, these risk management processes:

• promote the concept of near real-time risk management and ongoing information system authorization through the implementation of robust continuous monitoring processes;
• encourage the use of automation to provide senior leaders the necessary information to make cost-effective, risk-based decisions with regard to the organizational information systems supporting their core missions and business functions;
• integrate information security into the enterprise architecture and system development life cycle;
• provide emphasis on the selection, implementation, assessment, and monitoring of security controls, and the authorization of information systems;
• link risk management processes at the information system level to risk management processes at the organization level through a risk executive (function); and
• establish responsibility and accountability for security controls deployed within organizational information systems and inherited by those systems (i.e., common controls).

Continuous monitoring of security controls employed within or inherited by the system is an important aspect of managing risk to information from the operation and use of information systems. Conducting a thorough point-in-time assessment of the deployed security controls is a necessary but not sufficient practice to demonstrate security due diligence. An effective organizational information security program also includes a rigorous continuous monitoring program integrated into the system development life cycle. The objective of continuous monitoring is to determine whether the set of deployed security controls continues to be effective over time in light of the inevitable changes that occur. Such monitoring is intended to assist in maintaining an ongoing awareness of information security, vulnerabilities, and threats to support organizational risk management decisions.
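As a minimal illustration of the automated, recurring checks that distinguish continuous monitoring from a point-in-time assessment, the Python sketch below runs one monitoring cycle over a set of hosts. The checks, thresholds, and host records are hypothetical; a real program draws this data from enterprise scanning and management tools (as State's iPost, discussed below, does).

```python
# Hypothetical sketch of one continuous-monitoring cycle in the spirit of
# NIST SP 800-37; rerun on a fixed frequency (e.g., weekly) rather than
# once at authorization time.

def check_antivirus_signatures(host):
    """Return a finding, or None if the control appears effective."""
    if host.get("av_signature_age_days", 0) > 7:
        return "anti-virus signatures out of date"
    return None

def check_missing_patches(host):
    if host.get("missing_patches", 0) > 0:
        return f"{host['missing_patches']} missing patches"
    return None

CHECKS = [check_antivirus_signatures, check_missing_patches]

def run_cycle(hosts):
    """One pass over all hosts; findings feed reports to senior leaders."""
    findings = []
    for host in hosts:
        for check in CHECKS:
            finding = check(host)
            if finding:
                findings.append((host["name"], finding))
    return findings

hosts = [{"name": "ws-001", "av_signature_age_days": 12, "missing_patches": 3},
         {"name": "srv-01", "av_signature_age_days": 1, "missing_patches": 0}]
for name, finding in run_cycle(hosts):
    print(f"{name}: {finding}")
```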
The monitoring of security controls using automated support tools facilitates near real-time risk management. As described in the draft NIST SP 800-137, the monitoring process consists of the following steps:

• defining a strategy;
• establishing measures and metrics;
• establishing monitoring and assessment frequencies;
• implementing the monitoring program;
• analyzing security-related information and reporting findings;
• responding with mitigation actions or rejecting, avoiding, transferring, or accepting risk; and
• reviewing and updating the monitoring strategy and program.

In its September 2010 report, Continuous Asset Evaluation, Situational Awareness, and Risk Scoring (CAESARS) Reference Architecture Report, the Department of Homeland Security (DHS) indicates that a key aspect of a continuous monitoring process is analyzing security-related information, defining and calculating risk, and assigning scores. The report notes that risk scoring can provide information at the right level of detail so that managers and system administrators can understand (1) the state of the IT systems for which they are responsible, (2) the specific gaps between actual and desired states of security protections, and (3) the numerical value of every remediation action that can be taken to close the gaps. This information should help enable responsible managers to identify actions that can add value to improving security. The report also notes that risk scoring is not a substitute for other essential operational and management controls, such as incident response, contingency planning, and personnel security. When used in conjunction with other sources of information, such as the Federal Information Processing Standards 199 security categorization and automated asset data repository and configuration management tools, risk scoring can be an important contributor to an overall risk management strategy.

NIST is in the process of developing guidance that extends the CAESARS framework provided by DHS. NIST's extension is to provide information on an enterprise continuous monitoring technical reference architecture to enable organizations to aggregate collected data from security tools, analyze the data, perform scoring, enable user queries, and provide overall situational awareness. NIST has also emphasized, in SP 800-37, the value of planning, scheduling, and conducting assessments of controls as part of a continuous monitoring program. Such a program allows an organization to maintain the security authorization of an information system over time in a highly dynamic environment of operation with changing threats, vulnerabilities, technologies, and missions or business processes. Continuous monitoring of security controls using automated support tools facilitates near real-time risk management and promotes organizational situational awareness with regard to the security state of the information system.

State's key missions are to (1) strive to build and maintain strong bilateral and multilateral relationships with other nations and international organizations; (2) protect the nation against the transnational dangers and enduring threats arising from tyranny, poverty, and disease, global terrorism, international crime, and the spread of weapons of mass destruction; and (3) combine diplomatic skills and development assistance to foster a more democratic and prosperous world integrated into the global economy.
To accomplish its missions, State operates more than 260 embassies, consulates, and other posts worldwide. In addition, the department operates 6,000 passport facilities nationwide, 17 domestic passport agencies, 2 foreign press centers, 1 reception center, 5 offices that provide logistics support for overseas operations, 20 security offices, and 2 financial service centers. State is organized into nine functional bureaus and offices: the Bureaus of Administration, Consular Affairs, Diplomatic Security, Resource Management, Human Resources, Information Resource Management, and Overseas Buildings Operations; the Office of the Legal Adviser; and the Foreign Service Institute. Among other things, these functional bureaus provide services such as policy guidance, program management, and administrative support. In addition, State has six regional, or geographic, bureaus: the Bureaus of African Affairs, East Asian and Pacific Affairs, European and Eurasian Affairs, Western Hemisphere Affairs, Near Eastern Affairs, and South Asian Affairs. These bureaus focus on U.S. foreign policy and relations with countries within their geographical areas.

State's IT infrastructure, encompassing its worldwide computer and communications networks and services, plays a critical role in supporting the department's missions. This includes OpenNet, the department's global unclassified network, which uses Internet protocol to link State's domestic and local area networks abroad. OpenNet serves both foreign and domestic locations and has tens of thousands of hosts and about 5,000 routers and switches. The department's IT budget was approximately $1.2 billion for fiscal year 2010. The department's Foreign Affairs Manual assigns the following IT roles and responsibilities to the Bureau of Information Resource Management and the Bureau of Diplomatic Security:

• The Bureau of Information Resource Management, headed by the Chief Information Officer (CIO), is to support the effective and efficient creation, collection, processing, transmission, dissemination, storage, and disposition of information required to formulate and execute U.S. foreign policy and manage the department's daily operations. To meet the challenges of providing information in such an environment, the bureau relies on IT to disseminate this information throughout the foreign affairs community.
• The Bureau of Diplomatic Security has global responsibility for the protection of people, information, and property. Overseas, the bureau implements security programs to ensure the safety of those who work in every U.S. diplomatic mission. In the United States, the bureau protects the Secretary of State, the U.S. Ambassador to the United Nations, and foreign dignitaries who visit the United States. It also investigates passport and visa fraud, conducts personnel security investigations, and issues security clearances. Additional IT-relevant functions it performs are network monitoring and intrusion detection, incident handling and response, and threat analysis.

The Foreign Affairs Manual also assigns roles and responsibilities to various department officials for information security. These roles and responsibilities are summarized in the following table.

State has developed and implemented a complex, custom-made application called iPost to provide an enhanced monitoring capability for its extensive and worldwide IT infrastructure.
The source data for iPost come from a variety of enterprise management and monitoring tools including Active Directory (AD), Systems Management Server (SMS), and diagnostic scanning tools. These tools provide vulnerability data, security compliance data, anti-virus signature file data, and other system and network data to iPost. The data are posted to an iPost database, reformatted and reconciled, and then populated into other iPost databases. Data are associated with a “site” or “operational unit,” and integrated into a single user interface portal (or dashboard) that facilitates monitoring by department users. The primary users of iPost include local and enterprise IT administrators and their management. Designed specifically for State, iPost provides summary and detailed data as well as the capability to generate reports based on these data. Summary information provides an overview of the current status of hosts at a site, including summary statistics and network activity information. Detailed data on hosts within a site are also available through the application navigation. For example, when looking at data about a specific patch, a user can see which hosts need that patch. Users can select a specific host within the scope of their control to view all the current data iPost has for that host, such as all identified vulnerabilities. Examples of key iPost screens and reports for sites are provided in appendix II. State also developed and incorporated a risk scoring program into iPost that is intended to provide a continuous risk monitoring capability over its Windows-based hosts on the OpenNet network at domestic and overseas locations. The program uses data integrated into iPost from several monitoring tools to produce what is intended to be a single holistic view of technical vulnerabilities. The objectives of the program are to measure risk in multiple areas, motivate administrators to reduce risk, measure improvement, and provide a single score for each host, site, and the enterprise. Each host and user account is scored in multiple areas known as scoring components. The scoring program assigns a score to each vulnerability, weakness, or other infrastructure issue identified for the host based on the premise that a higher score means higher risk. Thus, the score for a host is the total of the scores of all its weaknesses. Scores are then aggregated across components to give a total or “raw” risk score for each host, site, region, or the enterprise. Scores are “normalized” so that small and large sites can be equitably compared. Letter grades (“A” through “F”), based on normalized scores, are provided to both administrators and senior management with the intent of encouraging risk reduction. The scoring program also has an “exception” process that aims to accommodate anomaly situations where the risk cannot be reduced by local administrators because of technical or organizational impediments beyond local control. In such cases, the risk score is to be transferred to the site or operational unit that has responsibility for mitigating the weakness and local administrators are left to address only those weaknesses within their control. According to a State official, summary data (scores by site and component) are permanently retained in a database while detailed data were generally retained until replaced by updated data from a recent scan. In instances when a host is missed on a scan, the older detailed data are kept until they are judged to be too old to be useful. 
After that, the host is scored for nonreporting, and the older data are deleted. The official also noted that under a new policy being implemented, detailed data will be retained for two to three scans so that users at a site can see what changed.

State has been recognized as a leader in federal efforts to develop and implement a continuous risk monitoring capability. In its CAESARS reference architecture report, DHS recognized State as a leading federal agency and noted that DHS's proposed target-state reference architecture for security posture monitoring and risk scoring is based, in part, on the work of State's security risk scoring program. In addition, in 2009 the National Security Agency presented an organizational achievement award to State's Site Risk Scoring Program team for significantly contributing to the field of information security and the security of the nation.

The iPost risk scoring program identifies and prioritizes several but not all areas affecting information security risk to State's IT infrastructure. Specifically, the scope of the iPost risk scoring program:

• addresses Windows hosts but not other IT assets on the OpenNet network, such as routers and switches;
• covers a set of 10 scoring components that includes several but not all information system controls that are intended to reduce risk; and
• assigns a score for each identified security weakness, but the extent to which the score reflects risk factors such as the impact and likelihood of threat occurrence that are specific to State's computing environment could not be demonstrated.

As a result, the iPost risk scoring program helps to identify, monitor, and prioritize mitigation of vulnerabilities and weaknesses for the areas it covers, but it does not provide a complete view of the information security risks to the department.

The scope of State's risk scoring program covers hosts that use Windows operating systems, are members of AD, and are attached to the department's OpenNet network. This includes tens of thousands of workstations and servers at foreign and domestic locations. However, the program's scope does not include other devices attached to the network, such as those that use non-Windows operating systems, firewalls, routers, switches, mainframes, databases, and intrusion detection devices. Vulnerabilities in controls for these devices could introduce risk to the Windows hosts and the information the hosts contain or process. State officials indicated that the focus on Windows hosts for risk scoring was due, in part, to the desire to demonstrate success of the risk scoring program before considering other types of network devices. Windows servers and workstations also comprised a majority of the devices attached to the network, and the availability of Microsoft tools such as AD and SMS and other enterprise management tools facilitated the collection of source data from Windows hosts. State officials indicated they were considering expanding the program to include scoring other devices on OpenNet.

In applying the risk management framework to federal information systems, agencies select, tailor, and supplement a set of baseline security controls using the procedures and catalogue of security controls identified in NIST SP 800-53, rev. 3. The effective implementation of these controls is intended to cost-effectively mitigate risk while complying with security requirements defined by applicable laws, directives, policies, standards, and regulations.
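The select-tailor-supplement step from NIST SP 800-53 can be sketched as simple set arithmetic. The control identifiers below are real 800-53 control IDs, but the baseline contents are abbreviated examples rather than the full catalog, and the tailoring choices are hypothetical.

```python
# Abbreviated, hypothetical slice of the NIST SP 800-53 baselines; the
# real catalog assigns far more controls per impact level.
BASELINES = {
    "LOW":      {"AC-2", "CP-9", "RA-5"},
    "MODERATE": {"AC-2", "AC-17", "CP-9", "RA-5", "SI-4"},
    "HIGH":     {"AC-2", "AC-17", "CP-9", "CP-10", "RA-5", "SI-4", "SI-7"},
}

def select_controls(impact_level, remove=(), supplement=()):
    """Start from the baseline for the system's FIPS 199 impact level,
    then tailor (remove non-applicable controls) and supplement."""
    return (BASELINES[impact_level] - set(remove)) | set(supplement)

# Example: a moderate-impact system with no remote access, plus an added
# physical access control (PE-3).
controls = select_controls("MODERATE", remove={"AC-17"}, supplement={"PE-3"})
print(sorted(controls))  # ['AC-2', 'CP-9', 'PE-3', 'RA-5', 'SI-4']
```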
To ensure that the set of deployed security controls continues to be effective over time in light of inevitable changes that occur, NIST SP 800-37 states that agencies should assess and monitor a subset of security controls, including technical, management, and operational controls, on an ongoing basis during continuous monitoring. Using data integrated into iPost from multiple monitoring tools that identify and assess the status of security-related attributes and control settings, the iPost risk scoring program supports a capability to assess and monitor a subset of the security controls, including technical and operational controls, on an ongoing basis. The program is built on a set of 10 scoring components, each of which, according to iPost documentation, represents an area of risk for which measurement data were readily available. The program addresses vulnerabilities, security weaknesses, and other control issues affecting risk to the Windows hosts. The 10 scoring components in iPost are described in the following table.

Although iPost provides a capability to monitor several types of security controls on an ongoing basis, it did not address other controls intended to reduce risk to Windows hosts, thereby providing an incomplete view of such risk. These controls include physical and environmental protection, contingency planning, and personnel security. Vulnerabilities in these controls could introduce risk to the department's Windows hosts on OpenNet. State officials recognized that these controls and associated vulnerabilities were not addressed in iPost and stated that, when they were first developing iPost, they focused on controls and vulnerabilities that could be monitored with existing automated tools, such as a scanning tool, AD, and SMS, since these could be implemented immediately. State officials believed this approach allowed them to develop a continuous monitoring application in the time frame they did with the limited resources available. Department officials also advised that the scoring program is intended to be scalable to address additional controls and that they may add other control areas in the future.

According to NIST SP 800-37, risk is a measure of the extent to which an entity is threatened by a potential circumstance or event, and is typically a function of (1) the adverse impacts that would arise if the circumstance or event occurs and (2) the likelihood of occurrence. In information assurance risk analysis, the likelihood of occurrence is a weighted factor based on a subjective analysis of the probability that a given threat is capable of exploiting a given vulnerability. According to iPost documentation, a key objective of the risk scoring program is to measure risk in multiple areas. State could not demonstrate the extent to which it considered factors relating to threat, impact, and likelihood of occurrence in assigning risk scores for security weaknesses. In developing the scoring methods for the 10 scoring components, the department utilized a working group comprised of staff from the Bureaus of Information Resource Management and Diplomatic Security. While documentation was limited to descriptions of certain scoring calculations assigned to each component, State officials explained that these working groups held discussions to determine a range of scores for each component.
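Drawing together the scoring mechanics described over the preceding pages (a host's score is the sum of its weakness scores, site scores are normalized so sites of different sizes can be compared, and normalized scores map to letter grades), the sketch below shows one way the aggregation could work. Only the "host score equals the sum of its weakness scores" rule comes directly from the report; the per-host-average normalization and the grade cutoffs are assumptions.

```python
# Sketch of iPost-style aggregation. Assumed: normalization divides the
# raw site score by host count, and the letter-grade cutoffs are invented.
# From the report: a host's score is the sum of its weakness scores.

def host_score(weakness_scores):
    return sum(weakness_scores)

def site_scores(hosts):
    """hosts maps host name -> list of weakness scores (all components)."""
    raw = sum(host_score(scores) for scores in hosts.values())
    normalized = raw / len(hosts)  # assumed: lets small and large sites compare
    return raw, normalized

def letter_grade(normalized):
    # Hypothetical cutoffs: lower score means lower risk, hence a better grade.
    for cutoff, grade in ((10, "A"), (20, "B"), (40, "C"), (70, "D")):
        if normalized < cutoff:
            return grade
    return "F"

site = {"ws-001": [3.0, 9.4], "ws-002": [1.2], "srv-01": [16.8, 4.0, 2.5]}
raw, normalized = site_scores(site)
print(f"raw={raw:.1f}, normalized={normalized:.1f}, grade={letter_grade(normalized)}")
```

An exception process like the one described would then move individual weakness scores from a site's total to the total of whichever unit is responsible for mitigation, leaving local administrators scored only on what they control.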
State officials explained that the premise for the scoring method was that the greater the risk, the higher the score and, therefore, the greater the priority for mitigation. However, minutes of the working groups' meetings and other documents did not show the extent to which threats, the potential impacts of the threats, and the likelihood of occurrence were considered in developing the risk scores, and State officials acknowledged these factors were not fully considered. Table 3 provides a description of how State calculates a score for each component.

The methodology used to assign scores for the vulnerability component illustrates the limited extent to which risk factors such as the impact and likelihood of threats specific to State's environment were considered. Each vulnerability is initially assigned a score according to the Common Vulnerability Scoring System (CVSS). According to NIST guidance, agencies can use the CVSS base scores stored in the National Vulnerability Database to quickly determine the severity of identified vulnerabilities. Although not required, agencies can then refine base scores by assigning values to the temporal and environmental metrics in order to provide additional contextual information that more accurately reflects the risk to their unique environment. However, State did not refine the base scores to reflect the unique characteristics of its environment. Instead, it applied a mathematical formula to the base scores to provide greater separation between the scores for higher-risk vulnerabilities and the scores for lower-risk vulnerabilities. As a result, the scores may not fully or accurately reflect the risks to State's OpenNet network.
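The report does not disclose the formula State applied to the CVSS base scores, only its effect: widening the gap between high- and low-severity vulnerabilities. Any convex transformation does this; the exponential rescaling below is purely a hypothetical example and, as the report notes, no such rescaling substitutes for refining the CVSS temporal and environmental metrics to the agency's own environment.

```python
# Hypothetical rescaling only; State's actual formula is not public. A
# convex map spreads CVSS base scores (0-10) so high-severity findings
# dominate a host's total far more than several low-severity ones.
import math

def spread(base_score, k=0.6):
    return math.exp(k * base_score) - 1

for cvss in (2.0, 5.0, 7.5, 10.0):
    print(f"CVSS {cvss:4.1f} -> component score {spread(cvss):7.1f}")
# CVSS  2.0 -> component score     2.3
# CVSS  5.0 -> component score    19.1
# CVSS  7.5 -> component score    89.0
# CVSS 10.0 -> component score   402.4
```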
Although the iPost risk scoring program does not provide a complete view of the information security risks to the department, it helps to identify, monitor, and prioritize mitigation of vulnerabilities and weaknesses associated with Windows hosts on the OpenNet network. State officials surveyed responded that they used iPost to (1) identify, prioritize, and fix security weaknesses and vulnerabilities on Windows devices and (2) implement other security improvements at their sites. For example, at least half of the respondents said that assigning a numeric score to each identified vulnerability and each component was very helpful in prioritizing the mitigation of Windows vulnerabilities. State officials stated that iPost was particularly helpful because, prior to iPost, officials did not have access to tools with these capabilities. However, State officials did not use iPost results to update key security documents related to the assessment and authorization of the OpenNet network.

State officials reported they used iPost to help them (1) identify Windows vulnerabilities on the devices for which they were responsible, (2) prioritize the mitigation of identified vulnerabilities, and (3) fix the vulnerabilities and confirm that mitigation was successfully implemented. Specifically, as part of their duties, State officials indicated they reviewed iPost regularly to see the results of the automated scanning of devices at their sites and what vulnerabilities had been identified. In particular, 14 of 40 survey respondents stated that they viewed the information in iPost at least once per day, 17 viewed information in iPost at least once a week, 3 viewed information at least once a month, and 4 viewed information less than once per month. In addition, State officials we interviewed indicated they reviewed iPost information on a daily basis, with one official stating it was his first task in the morning.

Of the information available in iPost, State officials surveyed noted that some screens and information were particularly useful at their sites. Specifically, the majority of the 40 survey respondents reported that the site summary screen, the site score summary screen, the level of detailed information on each component, and the site reports were very or moderately useful (see fig. 1). These screens show site and host identifying information, statistical data, and graphical representations of the site's risk scores, host computers, accounts, and identified weaknesses for each of the 10 components. Appendix II shows sample screens containing this information. The majority of State officials surveyed also indicated that iPost was very helpful in identifying Windows vulnerabilities. In particular, the majority of the 40 survey respondents indicated that iPost was very or moderately helpful in identifying vulnerabilities on devices, providing automated scanning of devices on site for vulnerabilities, and reviewing identified vulnerabilities on devices (see fig. 2). Furthermore, survey respondents and State officials we interviewed also reported being able to identify additional vulnerabilities at their sites that are not scored in iPost. For example, one official we spoke to said she would receive incident notices and would use iPost to obtain more information about the incident. Another official noted that iPost helped identify users who were utilizing the most bandwidth on the network. Generally, State officials concluded that iPost was particularly helpful because (1) it gave several officials access to capabilities they did not have prior to its use and (2) it streamlined the number of software utility and scanning tools officials had to use, making the monitoring process more efficient and effective.

State officials reported that iPost features helped them prioritize the mitigation of vulnerabilities at their sites. Most survey respondents indicated that iPost was very or moderately helpful in prioritizing the mitigation of Windows vulnerabilities. For example, more than half of the 40 respondents said that assigning a numeric score to each identified vulnerability and each component was very or moderately helpful in their efforts to prioritize vulnerability mitigation. In addition, over half of the 40 respondents felt that assigning letter grades to sites was very or moderately helpful in prioritization efforts, though 10 respondents felt this was only slightly helpful and 4 respondents felt this was not helpful at all. Of the iPost features that assist in prioritization, responses were mixed regarding how helpful the ranking of sites in comparison to other sites was for prioritizing vulnerability mitigation, with 22 respondents reporting it was very or moderately helpful, 9 slightly helpful, and 7 not at all helpful. Figure 3 provides details of survey responses. State officials we interviewed also indicated that iPost assisted them in prioritizing vulnerability mitigation. In particular, they found that scoring the vulnerabilities helped them to identify which ones needed to be fixed first.
With regard to the letter grades and ranking, one State official told us the letter grades were useful because they aided him in deciding whether to fix vulnerabilities identified in iPost (when his site had a grade lower than an A) or focus on other activities. The iPost dashboard also provides links to available resources that users can utilize to fix identified vulnerabilities. Table 4 provides an overview of the resource links located in iPost.

State officials reported they used the resources linked in iPost to help them fix vulnerabilities at their sites and to confirm those fixes were successfully implemented. In particular, the majority of survey respondents reported that the patch management Web site, the SMS post admin tool, and the IT Change Control Board baseline were very or moderately useful in helping them to fix vulnerabilities at their site. Over half of the 40 respondents stated that the Site Risk Scoring Toolkit (26), the IT Asset baseline (25), and the Diplomatic Security configuration guides (24) were very or moderately useful in helping them to fix vulnerabilities at their site. However, officials also reported they had never used some of the resources available in iPost (see fig. 4). In addition, State officials mentioned they utilized iPost to confirm that fixes they made to identified vulnerabilities were successfully implemented. In particular, survey respondents reported they either waited for the next automated scan results to be posted in iPost (28 of 31 respondents) or e-mailed headquarters in Washington, D.C., to request another scan to verify that the fix was implemented (9 of 31 respondents). Regarding the helpfulness of iPost in verifying that vulnerability fixes were successfully implemented, survey respondents found iPost to be very helpful (17 respondents), moderately helpful (12 respondents), slightly helpful (5 respondents), or not at all helpful (2 respondents).

State officials surveyed reported that using iPost also influenced them to make other security improvements at their sites. For example, of the 24 respondents who reported updating AD at their site, 17 reported they were influenced by using iPost to do so, and of the 20 respondents who reported changing how patches were rolled out, 16 reported that iPost influenced them in making this change. In addition, several respondents reported making security improvements in server configurations, site security policies, security training, and network architecture based in part on their use of iPost (see fig. 5). For example, one survey respondent reported that since the desktops shipped to the site had obsolete software on the standardized baseline image, he or she removed the old software before deploying the workstations at the site. Another survey respondent reported looking at iPost to see whether deployed patches needed to be pushed out again or installed manually.

NIST SP 800-37 states that continuous monitoring results should be considered with respect to necessary updates to an organization's security plan, security assessment report, and plan of action and milestones, since these documents are used to guide future risk management activities. The information provided by these updates helps to raise awareness of the current security state of the information system and supports the process of ongoing authorization and near real-time risk management.
However, State did not incorporate the results of iPost continuous risk monitoring activities into the OpenNet security plan, security assessment report, and plan of action and milestones on an ongoing basis. For example, plans of action and milestones were not created or updated to guide and monitor the remediation of vulnerabilities deemed to be exceptions. Thus, key information needed for tracking and resolving exceptions was not readily available. As a result, the department may limit the effectiveness of those documents in guiding future risk management activities.

Organizations establish controls to provide reasonable assurance that their data are timely, free from significant error, reliable, and complete for their intended use. According to Standards for Internal Control in the Federal Government, agencies should employ a variety of control activities suited for information systems to ensure the accuracy and completeness of data contained in the system and to ensure that the data are available on a timely basis to allow effective monitoring of events and activities and prompt reaction. These controls can include validating data; reviewing and reconciling output to identify erroneous data; and reporting, investigating, and correcting erroneous data. These controls should be clearly documented and evaluated to ensure they are functioning properly. NIST SP 800-39 also states that the processes, procedures, and mechanisms used to support monitoring activities should be validated, updated, and monitored. According to the Foreign Affairs Manual, stakeholders, system owners, and data stewards must ensure the availability, completeness, and quality of department data.

State has developed and implemented several controls that are intended to ensure the timeliness, accuracy, and completeness of iPost data. For example, State has employed automated tools to collect the monitoring data that are integrated into iPost. The use of automated tools is generally faster, more efficient, and more cost-effective than manual collection techniques. Automated monitoring is also less prone to human error. State also has used data collection schedules that support the frequent collection of monitoring data. For example, every Windows host at each iPost site is to be scanned for vulnerabilities every 7 days. The frequent collection of data helps to ensure its timeliness. In addition, State has established three scoring components (SMS Reporting, Vulnerability Reporting, and Security Compliance Reporting) in its risk scoring program to address instances when data collection tools do not correctly report the data required to compute a score for a component, such as when a host is not scanned. To illustrate, a host is assigned a score for the Vulnerability Reporting component if it misses two or more consecutive vulnerability scans (that is, the host is not scanned in 15 days). According to iPost documentation, this scoring method is intended to measure the risk of the unknown; it also serves as a control mechanism for identifying and monitoring hosts from which data were not collected in accordance with departmental criteria.
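The non-reporting control just described can be sketched directly from the stated rule: a host that misses two consecutive weekly scans (no scan within 15 days) accrues a Vulnerability Reporting score. The 15-day threshold comes from the report; the penalty magnitude is an assumption.

```python
# The 15-day threshold is from the report; the penalty value is assumed.
from datetime import date, timedelta

NON_REPORTING_THRESHOLD = timedelta(days=15)   # two missed weekly scans
PENALTY = 10.0                                 # hypothetical score magnitude

def vulnerability_reporting_score(last_scan: date, today: date) -> float:
    """Score 'the risk of the unknown' for a host that stops reporting."""
    if today - last_scan > NON_REPORTING_THRESHOLD:
        return PENALTY
    return 0.0

print(vulnerability_reporting_score(date(2010, 6, 1), date(2010, 6, 20)))   # 10.0
print(vulnerability_reporting_score(date(2010, 6, 10), date(2010, 6, 20)))  # 0.0
```

This dual use also helps explain the scoring anomaly described below: if scan uploads are delayed, a host can cross the 15-day threshold in iPost's data even though it was actually scanned.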
Although the pilot was completed in April 2009, State officials noted they continue to rely on iPost users to report missed scans and any inaccurate or incomplete data they observe. Notwithstanding these controls, the timeliness, accuracy, and completeness of iPost data were not always assured. For example, there were several instances in which iPost data were not updated as frequently as scheduled, were inconsistent, or were incomplete, as illustrated below.

• Frequency of updates to iPost data supports federal requirements, but vulnerability scanning was not conducted as frequently as State scheduled. FISMA requires that agencies conduct periodic testing and evaluation of the effectiveness of information security policies, procedures, and practices, to be performed with a frequency based on risk but no less frequently than annually. According to iPost documentation, each host is to be scanned for vulnerabilities every 7 days. However, a review of scanning data for 15 sites (or 120 weekly scans) during an 8-week period in summer 2010 revealed that only 7 percent of the weekly scans successfully checked all Windows hosts at the site scanned. While 54 percent of the weekly scans successfully checked between 80 and 99 percent of the site's Windows hosts, 28 percent of the scans checked less than 40 percent of a site's hosts. Ten sites experienced at least one weekly scan cycle during which none of their hosts were scanned for vulnerabilities and a rescheduled scan was also not performed. According to iPost documentation, a host may not have been scanned because the host was powered off when the scan was attempted, the host's Internet protocol address was not included in the range of the scan, or the scanning tool did not have sufficient permissions to scan the host. Although the frequency of updates to iPost data supports State's efforts to satisfy FISMA's requirement for periodic testing and evaluation, the updates to vulnerability information in iPost were not as timely as intended. As a result, iPost users may base risk management decisions on incomplete data.

• Data from vulnerability scans were sometimes not uploaded to iPost in a timely manner. State officials stated that vulnerability scanning results are typically presented in the iPost staging database at least 1 day following the scan. However, the length of time it took for scan results to be uploaded into iPost was not consistent across all sites, and the delays affected the scoring results for certain sites. The scanning results for the majority of the 15 sites reviewed were typically presented in iPost at least 1 day following the scan, although it took up to 3 days for certain foreign and domestic sites. According to State officials, delays with uploading information to iPost for certain geographical areas around the world occurred because of the network's architecture. As a result, numerous Windows hosts at 6 of the 15 sites reviewed received scores in iPost's vulnerability reporting component for missing two consecutive vulnerability scans even though the hosts had been scanned. Consequently, iPost users may make risk management decisions based on inaccurate or incomplete data.

• Data presented in iPost about the number of hosts addressed were sometimes inconsistent. According to State officials, the information in iPost-generated reports should reflect the information displayed on iPost screens; however, information presented about the number of hosts was sometimes inconsistent.
For example, the number of hosts that were not scanned for security compliance differed between the iPost reports and the site summary screens for each of the 15 sites reviewed. According to a State official, the summary screen displayed the number of hosts that were not scanned over two weekly cycles, whereas the iPost reports presented the number of hosts that were not scanned during the current weekly cycle, but iPost did not clearly label the data elements accordingly. In addition, several iPost reports generated on the same day at one site showed a different number of hosts for which SMS did not report data. iPost-generated enterprise reports also varied in terms of the total number of hosts being monitored and scored, ranging from approximately 84,000 to 121,000. As a result, iPost users may base risk management decisions on inconsistent or inaccurate data.

Several factors contributed to the conditions described above. Technical limitations of the data collection tools contributed to missed scans. For example, the diagnostic scanning tool used by State performed agentless network scans of specific Internet protocol address ranges, so hosts that were powered off during the scan or were not included in the address range were missed. State officials were aware of the limitations of the scanning tool and indicated they had taken steps to address them. Specifically, State acquired and was implementing a new diagnostic tool based on agent technology to collect vulnerability and security compliance data. Also, iPost did not retain detailed data from multiple cycles of scans of hosts at a site over an extended period because the data were overwritten by new scan results or deleted. As a result, iPost users could not conduct trend analyses to determine the extent to which scans were successfully completed as scheduled or whether data were accurately and consistently presented in iPost screens and reports. State recognized the importance of having detailed historical scan data. As noted earlier, a State official stated that a new policy was being implemented that requires detailed data to be retained for two to three scans so that users at a site can see what changed. In addition, State had not adequately documented all of the controls in place for ensuring the timeliness, accuracy, and completeness of data, and based on our review, we could not determine whether all of the stated controls were in place or working as intended. Further, State had not implemented formal procedures for systematically validating data and reviewing and reconciling output in iPost on an ongoing basis to detect and correct inconsistent and incomplete data; State officials confirmed these procedures were not in place. Developing, documenting, and implementing these procedures and controls, and ensuring that they are working as intended, can provide increased assurance that information displayed in iPost is consistent, accurate, and complete.

State's implementation of iPost has resulted in improvements to the department's information security by providing extensive and timely information on vulnerabilities and weaknesses on Windows servers and workstations, while also creating an environment in which officials are motivated to fix vulnerabilities based on department priorities. However, State has faced, and will continue to face, challenges in implementing iPost.
These challenges include overcoming limitations and technical issues with data collection tools, identifying and notifying individuals with responsibility for site-level security, implementing configuration management, and adopting a continuous monitoring strategy for moving forward in incorporating additional functionality into iPost.

The implementation of iPost has enhanced information security at the department by offering a custom application with a common methodology for data collection, analysis, and reporting of information that security officers and system administrators can use to find extensive information on the security of the Windows hosts they are responsible for and to fix specified vulnerabilities. For example, information in iPost allows users to:

• obtain a quick visual overview of compliance, vulnerability, patch, antivirus, and other component status for Windows hosts via the site summary report;
• access information about the status of security controls to determine the extent to which the controls are implemented correctly; and
• determine which hosts were scanned or not scanned and when this occurred.

iPost and the risk scoring program have also facilitated the identification of other potential security problems, since users could make connections between pieces of data to find possible trends or patterns. For example, one official responded in the survey that he was able to identify a network performance problem by reviewing data available on the iPost portal and, as a result, increase the data transmission rates over the network for his site. In addition, because regional and enterprise managers have access to iPost data for the sites in their region or across the enterprise, they have increased awareness of security issues at specific sites and across the enterprise, giving department officials a common language with which to discuss vulnerabilities and make decisions regarding their mitigation. Moreover, the inclusion of a scoring approach, with the associated ranking of sites and letter grades within iPost, has created a mechanism for the department chief information security officer to use in conveying to system administrators the department's priorities in addressing the mitigation of identified vulnerabilities or the implementation of particular patches, among other things. The scoring method has motivated security officers and system administrators to focus first on the vulnerabilities that have been given the highest scores and to mitigate these weaknesses on affected machines. This approach also allows regional and enterprise officials who review the letter grades and rankings to identify sites where improvements need to be made. Having this capability has enabled the department to respond to emerging threats associated with vulnerabilities in commercial products that occurred over the past year. iPost implementation has also enhanced information security at State because having a continuous monitoring program in place provides information on weaknesses affecting Windows devices. In particular, controls on these devices are assessed more often than the testing and evaluations of controls that are performed as part of the certification and accreditation of OpenNet every 3 years. By taking steps to implement continuous monitoring through iPost, State has been able to obtain information on vulnerabilities and weaknesses affecting tens of thousands of Windows devices on its OpenNet network every couple of days, weekly, or biweekly.
Having this type of capability has enabled department officials to identify vulnerabilities and fix them more rapidly than in the years prior to iPost implementation.

Limitations in the capabilities of commercially available tools and technical issues with the tools used to collect data on vulnerabilities created challenges for State in implementing a continuous monitoring program. State officials stated that when they initially began to conceptualize the application, there were no commercial products available with the functionality and capabilities needed, so they developed iPost with the assistance of contractors. There were challenges involved with iPost's development, including resolving technical issues with using scanning tools and displaying the results obtained from various data collection tools that had different data file formats. For example, State officials identified the following technical issues with the data collection tools:

• Certain tools did not always check each control setting as expected, did not always scan hosts when scheduled, or created false positives that had to be analyzed and explained.
• A vendor did not consistently keep its scanning tool up to date with the common vulnerabilities and exposures from the National Vulnerability Database.
• Scanning tools of different vendors used different approaches for scoring groups of vulnerabilities, so when the agent software scanner of a new vendor was implemented, State had to curve scores so that the disparities did not penalize the site.

Another challenge with running scans is that scanning tools do not have the capability to scan tens of thousands of hosts at one time without significant network performance degradation. Therefore, the department has had to establish scanning schedules so that all hosts can be scanned over a period of time. State officials stated they had taken steps to address these challenges by working with a vendor to enhance its data collection tool and by selecting an alternate tool when appropriate. In addition, department officials stated they were working with other agencies and a contractor to develop additional capabilities that better meet their needs. Building these relationships could benefit the department as it moves forward in monitoring additional controls and developing additional capabilities in iPost.

According to Standards for Internal Control in the Federal Government, authority and responsibility should be clearly assigned throughout the organization and clearly communicated to all employees. Responsibility for decision making is clearly linked to the assignment of authority, and individuals are held accountable accordingly. iPost generally identified the local administrator(s) for each Windows host, who would generally have the access permissions necessary to resolve nonexception weaknesses on the host. However, iPost did not identify the individual or contact point at each site or operational unit who had site-level responsibility for reviewing iPost site reports, monitoring the security state of the site's hosts, and ensuring that the control weaknesses identified on all hosts at the site were resolved. In particular, there was confusion at the department as to who was responsible for operational units when this information was requested, and the information that was subsequently provided was inaccurate for several units.
As a result, the department has reduced assurance that responsibility for monitoring the security state of, and resolving known weaknesses on, a site's Windows hosts is clearly conveyed. In addition, departmental officials did not always notify senior managers at sites with low security grades of the need to fix security weaknesses. According to State officials, operational units in iPost with grades of C- or below for 3 consecutive months are to receive warning letters indicating the need to improve their grades. From April 2009 to March 2010, 62 out of 483 sites received letters noting the need for improvement; however, 6 additional sites should have received letters but did not. In addition, 33 sites that received at least one warning letter should have received one or more additional warnings for months with low grades but did not. As a result, senior managers may not have been fully aware of the security state of Windows hosts at the sites they oversee.

According to the Foreign Affairs Handbook, the development of new IT services, systems and applications, and feature and maintenance enhancements are to follow the guidance outlined in the Foreign Affairs Manual. The Foreign Affairs Manual states that configuration management plans should be developed for IT projects and should identify the system configuration, components, and the change control processes to be put in place. Effective configuration management also includes a disciplined process for testing configuration changes and approving modifications, including the development of test plan standards and the documentation and approval of test plans, to make sure the program operates as intended and no unauthorized changes are introduced.

State had not fully implemented configuration management for iPost. Although the department had maintained release notes on updates, scoring documents, and presentations on iPost, key information about the program and its capabilities was not fully documented. For example, there were no diagrams of the architecture of iPost and no configuration baseline. In addition, there was no documentation of appropriate authorization and approval of the changes included in iPost updates. Furthermore, although State improved its process for testing applications and subsequent versions of iPost from a manual and informal testing process in April 2010, it still lacks a written test plan and an acceptance testing process under which new releases are approved prior to release. For example, test procedures were not performed or documented to ensure that the scripts for applying scoring rules matched the stated scoring methodology and that the scoring scripts were sufficiently tested to ensure that they fulfilled State's intended use. As the department moves forward with the implementation of additional capabilities for iPost, the need for a robust configuration management and testing process increases. Until such a process is fully developed, documented, and maintained, State has reduced assurance that iPost is configured properly and that updates or changes to the application and scoring rules are working as intended.

According to NIST, as part of a risk management framework for federal information systems, a strategy for the selection of appropriate security controls to monitor and the frequency of monitoring should be developed by the information system owner and approved by the authorizing official and senior information security officer. Priority for the selection of controls to monitor should be given to controls that are likely to change over time.
In addition, the security status of the system should be reported to the authorizing official and other appropriate organizational officials on an ongoing basis in accordance with the monitoring strategy. The authorizing official should also review the effectiveness of deployed controls on an ongoing basis to determine the current risk to organizational operations and assets. According to the Foreign Affairs Manual, risk management personnel should balance the tangible and intangible cost to the department of applying security safeguards against the value of the information and the associated information system.

While State has reported success with implementing iPost to provide ongoing monitoring of certain controls over Windows hosts on OpenNet and with reporting the status of these controls across the enterprise to appropriate officials, the department faces an ongoing challenge in continuing this success because it does not have a documented continuous monitoring strategy in place. Although the department began continuous monitoring before applicable detailed federal guidance was available and selected controls to monitor based on the capabilities of existing data collection tools, it has not re-evaluated the controls monitored to determine whether the associated risk has changed. In addition, although department officials reported they were working to implement additional controls, there was no documentation to indicate whether the department had weighed the associated risk and the tangible and intangible costs of implementation when selecting which controls it intended to monitor. Furthermore, how frequently the security status of the Windows hosts should be reported to the authorizing official and other appropriate officials was not documented. Therefore, until the department develops, documents, and implements a continuous monitoring strategy, it may not have sufficient assurance that it is effectively monitoring the deployed security controls and reporting the security status to designated officials with sufficient frequency.

Leading practices for program management, established by the Project Management Institute in The Standard for Program Management, state that the information that program stakeholders need should be made available in a timely manner throughout the life cycle of a program. In addition, Standards for Internal Control in the Federal Government states that information should be communicated to management and others within the agency who need it. Management should also ensure there are adequate means of communicating with external stakeholders that may have a significant impact on the agency achieving its goals. A further ongoing challenge for the department is understanding and managing internal and external stakeholders' expectations for continuous monitoring activities in terms of what goals and objectives can reasonably be achieved with iPost. These expectations include the following:

• Lowering scores in iPost always implies that risks to the individual sites are decreasing. With the current scoring approach used in iPost, lowering a score may imply that the associated risks to the site are being lowered as well, but there may be other reasons for the score being adjusted that are not related to mitigating the risk to particular hosts or sites.
In particular, State officials have reported that (1) curving of the scores is performed in order to promote fairness; (2) exceptions are granted, which shift the score from one operational unit to another; and (3) responsibility for hosts at overseas units is moved to domestic units, which adjusts the scores accordingly. State officials should take care to convey to managers who make decisions based on scores and grades that a lowering of scores in iPost does not necessarily indicate that risks to the department are decreasing.

• Having continuous monitoring may replace the need for other assessment and authorization activities. According to NIST, a well-designed and well-managed continuous monitoring program can transform an otherwise static and occasional security control assessment and risk determination process that is part of periodic assessment and authorization into a dynamic process that provides essential, near real-time security status-related information. However, continuous monitoring does not replace the explicit review and acceptance of risk by an authorizing official on an ongoing basis as part of security authorization and, in itself, does not provide a comprehensive, enterprisewide risk management approach. In addition, since continuous monitoring may identify risks associated with control weaknesses on a frequent basis, there may be instances where the problem cannot be fixed immediately, such as cases where State granted exceptions for weaknesses for periods of a year or more. There will need to be a mechanism in place for the designated authority to accept the associated risks of granting these exceptions. As the department moves forward with the implementation of additional capabilities, it will be important to recognize the limitations of continuous monitoring when undertaking these efforts.

State officials confirmed that managing stakeholder expectations, in particular those of external stakeholders, had been a challenge. The Chief Information Security Officer (CISO) stated that the department was attempting to address these expectations by clarifying information or giving presentations to external audiences, and specifically communicated that iPost was not intended to entirely replace all certification and accreditation activities. If State continues to provide reliable and accurate information regarding continuous monitoring capabilities to both internal and external stakeholders, the department should be able to effectively manage stakeholder expectations.

State's implementation of iPost has improved visibility over information security at the department by providing enhanced monitoring of Windows hosts on the OpenNet network with nearer-to-real-time awareness of security vulnerabilities. As part of this effort, State's development of a risk scoring program has led the way in creating a mechanism that prioritizes the work of system administrators to mitigate vulnerabilities; however, it does not incorporate all aspects of risk. Establishing a process for defining and prioritizing risk through a scoring mechanism is not simple, and solutions to these issues have not yet been developed at State. Nevertheless, State's efforts to address these issues could continue to break new ground in improving visibility over the state of information security at the department. iPost has helped IT administrators identify, monitor, and mitigate information security weaknesses on Windows hosts.
In addition, State officials reported that using iPost had led them to make other security improvements at their sites. However, while iPost provides a useful tool for identifying, monitoring, and reporting on vulnerabilities and weaknesses, State officials have not used iPost results to update key security documents, which can limit the effectiveness of those documents in guiding future risk management activities. As part of iPost implementation, State has implemented several controls that are intended to help ensure the timeliness, accuracy, and completeness of iPost data; however, vulnerability scans were not always conducted according to State's schedule, and scanning results were uploaded to iPost in an inconsistent manner. Further, iPost data were not always consistent and complete. The acquisition and implementation of new data collection tools may help State overcome the technical limitations of its scanning tool. Establishing robust procedures for validating data and reviewing and reconciling output on an ongoing basis to ensure data consistency, accuracy, and completeness can provide additional assurance to iPost users and managers who, based on iPost data, make risk management decisions regarding the allocation and prioritization of resources for security mitigation efforts at sites or across the enterprise.

iPost provides several benefits in terms of providing more extensive and timely information on vulnerabilities, while also creating an environment where officials are motivated to fix vulnerabilities based on department priorities. Nevertheless, State faces ongoing challenges with the continued implementation of iPost. As State implements additional capabilities and functionality in iPost, the need increases for the department to identify and notify individuals responsible for site-level security, develop configuration management and testing documentation, develop a continuous monitoring strategy, and manage and understand internal and external stakeholder expectations in order to ensure the continued success of the initiative for enhancing department information security.

To improve implementation of iPost at State, we recommend that the Secretary of State direct the Chief Information Officer to take the following seven actions:

• Incorporate the results of iPost's monitoring of controls into key security documents such as the OpenNet security plan, security assessment report, and plan of action and milestones.
• Document existing controls intended to ensure the timeliness, accuracy, and completeness of iPost data.
• Develop, document, and implement procedures for validating data and reviewing and reconciling output in iPost to ensure data consistency, accuracy, and completeness.
• Clearly identify in iPost individuals with site-level responsibility for monitoring the security state and ensuring the resolution of security weaknesses of Windows hosts.
• Implement procedures to consistently notify senior managers at sites with low security grades of the need for corrective actions, in accordance with department criteria.
• Develop, document, and maintain an iPost configuration management and test process.
• Develop, document, and implement a continuous monitoring strategy that addresses risk, to include changing threats, vulnerabilities, technologies, and missions/business processes.
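To make the third recommended action concrete, the sketch below shows the kind of automated reconciliation check such procedures might include. It is illustrative only: the 15-day cutoff comes from iPost documentation, while the function names and record layout are assumptions and do not reflect iPost's actual schema.

```python
from datetime import date

def reconcile_host_counts(screen_hosts: set[str], report_hosts: set[str]) -> dict:
    """Flag hosts that appear on an iPost screen but not in the corresponding
    generated report, or vice versa, so that discrepancies like the differing
    not-scanned counts observed at the 15 sites are detected systematically."""
    return {
        "screen_only": sorted(screen_hosts - report_hosts),
        "report_only": sorted(report_hosts - screen_hosts),
    }

def wrongly_scored_hosts(scores: dict[str, float],
                         last_scan: dict[str, date],
                         as_of: date,
                         cutoff_days: int = 15) -> list[str]:
    """Flag hosts penalized for missed scans even though a scan result exists
    within the cutoff window -- the upload-delay scoring error observed at
    6 of the 15 sites reviewed."""
    return sorted(host for host, score in scores.items()
                  if score > 0
                  and host in last_scan
                  and (as_of - last_scan[host]).days < cutoff_days)
```

Run on a schedule, with results logged and exceptions investigated, checks of this kind would both document the control and detect erroneous data before users rely on it.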
In written comments on a draft of this report, which were signed by the Chief Financial Officer for the Department of State and are reproduced in appendix III, the department said the report was generally helpful in identifying the challenges State faces in implementing a continuous monitoring program around the world. In addition, State described metrics that it uses for correcting known vulnerabilities and measuring relative risks at sites. The department also concurred with two of our recommendations, partially concurred with two, and did not concur with three.

Specifically, State concurred with our recommendations and indicated that it has or will (1) implement procedures to consistently notify senior managers at sites with low security grades of the need for corrective actions, in accordance with department criteria, and (2) develop, document, and implement a continuous monitoring strategy.

State partially concurred with our recommendation to develop, document, and implement procedures for validating data and reviewing and reconciling output in iPost to ensure data consistency, accuracy, and completeness. The department stated that it had developed and implemented procedures for validating and testing output in iPost by scanning for vulnerabilities every 7 days and by establishing three scoring components to score hosts when data collection tools do not correctly report the data required to compute a score. We agree and acknowledge in our report that the department has established these controls; however, the controls do not always ensure that the data collected are accurate and complete. As mentioned in the report, we identified instances where iPost data were inconsistent, incomplete, or inaccurate, including the scoring of hosts for missed vulnerability scans when a scan had occurred. State officials make decisions about the prioritization of control weakness mitigation activities and the allocation of resources based on information in iPost, so the accuracy and completeness of that information are important. Having procedures for validating data and reconciling the output in iPost will help ensure that incomplete or incorrect data are detected and corrected, and documenting these procedures will help ensure that they are consistently implemented.

State also partially concurred with our recommendation that the department develop, document, and maintain an iPost configuration management and test process. The department questioned the need for a diagram of the architecture of iPost and for a written test plan and acceptance testing process, and stated that our report noted State had improved its process for testing versions of iPost. We have modified the report to provide additional context for the statement regarding State's testing process in order to clarify any misunderstanding. In addition, as mentioned in the report, we identified areas where testing procedures were not performed or documented, including ensuring that the scripts for applying the scoring rules matched the stated scoring methodology. In addition to lacking basic diagrams showing iPost interactions, we also determined that the department lacked a configuration baseline and a documented approval process for iPost changes.
Having a robust configuration management and testing process helps to provide reasonable assurance that iPost is configured properly, that updates or changes to the application are working as intended, and that no unauthorized changes are introduced, all of which helps to ensure the security and effectiveness of the continuous monitoring application.

State did not concur with our recommendation for incorporating the results of iPost's monitoring of controls into key security documents such as the OpenNet security plan, security assessment report, and plan of action and milestones. State did not provide a rationale for its nonconcurrence with our recommendation, and instead focused on providing additional information about the department's use of metrics related to assigning risk values. As NIST guidance indicates, incorporating results from continuous monitoring activities into these key documents supports the process of ongoing authorization and near real-time risk management. In addition, as mentioned in the report, State has granted exceptions for weaknesses in iPost for periods of a year or more but has not created or updated plans of action and milestones to guide and monitor the remediation of these exceptions. Continuous monitoring does not replace the explicit review and acceptance of risk by an authorizing official on an ongoing basis as part of security authorization. The results of iPost's monitoring of controls, including the ongoing monitoring and remediation of the exceptions, need to be documented in order to identify the resources and time frames necessary for correcting the weaknesses. In addition, the designated authority will need to review these results to ensure OpenNet is operating at an acceptable level of risk.

In addition, the department did not concur with our recommendation to document existing controls intended to ensure the timeliness, accuracy, and completeness of iPost data, because it stated that it regularly evaluates iPost data in these areas and that further documentation was of questionable value. However, as mentioned in our report, we identified incomplete, inconsistent, or inaccurate data in iPost during our review and could not determine whether all of the controls the department told us it had implemented were actually in place or working as intended. Documenting the controls helps to provide assurance that all appropriate controls have been considered and can serve as a point of reference for periodically assessing whether they are working as intended.

The department also did not concur with our recommendation to clearly identify in iPost individuals with site-level responsibility for monitoring the security state and ensuring the resolution of security weaknesses of Windows hosts. The department noted that it failed to understand the necessity of individually naming those staff with site-level responsibility in iPost, since we had surveyed State officials regarding their use of iPost. As we noted in the report, the department relies on users to report when inaccurate and incomplete iPost data and scoring are identified, so that they may be investigated and corrected as appropriate—even though there is no list in iPost showing who is responsible for a particular operational unit. As we discovered when we surveyed State officials, there was confusion at the department as to who was responsible for operational units, and the information provided to us on who was responsible was incorrect for several units.
To clarify this issue, we have incorporated additional context in the report on identifying individuals with responsibility for site-level security.

Lastly, the department did not concur with our findings that the iPost risk scoring program does not provide a complete view of the information security risks to the department. Although the department's response generally did not address the findings made in the report, the department did state that progress in addressing control weaknesses in iPost had led to an 89 percent reduction in measured cyber security risk and that it was impossible and impracticable to cover all areas of information security and all security controls in NIST 800-53 as part of a continuous monitoring program. However, we did not state that all areas of information security and all controls in NIST 800-53 should be monitored as part of such a program. Rather, we stated that because iPost monitors only Windows devices and not other devices on OpenNet, because it addresses a select set of controls, and because State officials could not demonstrate the extent to which all components needed to measure risk—threats, the potential impacts of the threats, and the likelihood of occurrence—were considered when developing the scoring, iPost does not provide a complete view of the information security risks to the department. Furthermore, as we mentioned in the report, the department should exercise care in implying that the lowering of scores in iPost means that risks to individual sites are decreasing, as there may be other reasons for a score being adjusted that are not related to the mitigation of risk to particular hosts or sites, such as the curving of scores or the shifting of scores from one operational unit to another. While such activities may promote fairness, the lowering of scores may not necessarily indicate that risks to the department are decreasing.

As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the Secretary of State and interested congressional committees. The report will also be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions regarding this report, please contact me at (202) 512-6244 or at wilshuseng@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix IV.

The objectives of our review were to determine (1) the extent to which the Department of State (State) has identified and prioritized risk to the department in its risk scoring program; (2) how agency officials use iPost information to implement security improvements; (3) the controls for ensuring the timeliness, accuracy, and completeness of iPost information; and (4) the benefits and challenges associated with implementing iPost. To address our objectives, we conducted our review in Washington, D.C., where we obtained and analyzed program documentation, reports, and other artifacts specific to iPost, the scoring program and components, and data collection tools; and interviewed State officials.
To address the first objective, we analyzed guidance from the National Institute of Standards and Technology (NIST) on risk management and vulnerability scoring and compared it to iPost risk scoring documentation to determine whether the department's criteria and methodology were consistent with federal guidance. Where documentation on the department's process for defining and prioritizing risk did not exist, we obtained information from agency officials in these areas where possible. We also interviewed officials from NIST to obtain information on the Common Vulnerability Scoring System and how agencies can use the scoring to more accurately reflect risk to agency environments.

To address the second objective, we conducted interviews with State's Chief Information Officer, Deputy Chief Information Officer for Operations, Chief Information Security Officer, selected Executive Directors, Regional Computer Security Officers, and Information Systems Officers or Information Systems Security Officers to obtain information on how these officials used iPost—in particular, what information or summary reports they used from iPost to make decisions about security improvements and what types of security improvements were made. We analyzed the information provided by the officials to determine patterns regarding what information was used from iPost and what types of improvements were made.

For the third objective, we analyzed department requirements for the frequency of updates, accuracy, and completeness of iPost data to determine what controls should be in place. We obtained documentation and artifacts on department controls, or other mechanisms or procedures, for each of the scoring components covered in iPost related to the frequency, accuracy, and completeness of data and compared these to department requirements. Where the department lacked requirements in these areas, we analyzed our guidance on internal controls and assessment of controls for data reliability to determine what criteria should be in place to provide sufficient assurance of accurate, complete, and timely data. In areas where documentation on the department's controls did not exist, we obtained information from department officials where possible. We also selected 15 units from the list of operational units to perform analyses to determine the frequency, accuracy, and completeness of data in iPost. Operational units were selected based on location (domestic or overseas), the number of hosts at the site, and bureau to ensure representation from among the geographic and functional bureaus within the department. Frequency data obtained from iPost were tabulated to determine the number of hosts scanned and the dates scanned, and the data were then compared to the scanning schedule to determine the frequency with which scans occurred at the site for the period of July 19, 2010, through September 8, 2010. For accuracy and completeness, we compared detailed screens on information related to the vulnerability and security compliance components for each of the above 15 sites with generated reports obtained from iPost. We also obtained raw scan data from State's scanning tool for three financial center sites and compared them to iPost to check frequency and accuracy; however, an analysis of the data obtained determined that they were unusable due to inconsistencies in how the data were reformatted when viewed.
In addition, for completeness, we obtained detailed screen information on the configuration settings scanned as part of the security compliance component from one site and compared the scanned settings evaluated to the list of required settings in a Diplomatic Security mandatory security settings document for Windows XP.

To address the fourth objective, we analyzed federal guidance on what activities should be undertaken as part of the implementation of continuous monitoring, as well as department policies and guidance related to information technology management and projects, and compared them to department activities undertaken for iPost implementation. We also obtained descriptions of benefits and challenges from the Chief Information Officer, Deputy Chief Information Officer for Operations, Chief Information Security Officer, selected Executive Directors, Regional Computer Security Officers, and Information Systems Officers or Information Systems Security Officers. We analyzed the information obtained from the department, federal guidance, and the results of our findings for the other objectives to identify patterns related to the benefits and challenges of implementation.

For our second, third, and fourth objectives, we also obtained information through a survey of individuals at domestic and overseas sites to understand iPost's current capabilities as of August 2010. We surveyed individuals at 73 of the 491 operational units in iPost. We selected survey sites by reviewing the list of operational units in iPost and chose domestic sites from among each of the functional bureaus and overseas sites from among each of the geographic bureaus to make sure there was coverage for each bureau and region in the sample. Sites within each functional and geographic bureau were selected based on the number of hosts at the site and the current letter grade received in order to include sites with varying numbers of hosts and grade scores. We developed a survey instrument to gather information from domestic and overseas department officials on how they used iPost at their location, whether they had experienced problems with using data collection tools, and what benefits and challenges they had experienced with implementation between August 1, 2009, and August 30, 2010. Our final sample included 73 sites (36 overseas and 37 domestic). The sample of sites we surveyed was not a representative sample, and the results from our survey cannot be generalized to any sites outside those sampled. However, the interviews and survey information provided illustrative examples of the perspectives of various individuals about iPost's current and future capabilities. We identified a specific respondent at each site by either reviewing the contact list on State's Web site or asking State officials. This person was the Information Management Officer, Information Systems Officer, System Administrator, or Information Systems Security Officer, or the acting or assistant official in one of these positions at a given site. To minimize errors that might occur from respondents interpreting our questions differently from our intended purpose, we pretested the questionnaire by phone, in four separate sessions, with State officials who were in positions similar to those of the respondents who would complete our actual survey. During these pretests, we asked the officials to complete the questionnaire as we listened to the process.
We then interviewed the respondents to check whether (1) the questions were clear and unambiguous, (2) the terms used were precise, (3) the questionnaire was unbiased, and (4) the questionnaire did not place an undue burden on the officials completing it. We also submitted the questionnaire for review by a GAO survey methodology expert. We modified the questions based on feedback from the pretests and review, as appropriate. Overall, of the 73 sampled sites, 40 returned completed questionnaires, and 2 of the nonresponding sites were ineligible because they had been consolidated into other sites, leading to a final response rate of 57.1 percent; however, not all respondents provided answers to every question. Two of the sites answered about their own site and other sites under their supervision; each of these was treated as a single data point (i.e., site) in statistical analyses. We reviewed all questionnaire responses and followed up by phone and e-mail to clarify the responses as appropriate.

The practical difficulties of conducting any survey may introduce nonsampling errors. For example, differences in how a particular question is interpreted, the sources of information available to respondents, or the types of respondents who do not respond to a question can introduce errors into the survey results. We included steps in both the data collection and data analysis stages to minimize such nonsampling errors. We examined the survey results and performed computer analyses to identify inconsistencies and other indications of error, and addressed such issues as necessary. An independent analyst checked the accuracy of all computer analyses to minimize the likelihood of errors in data processing. In addition, GAO analysts answered respondent questions and resolved difficulties respondents had in answering our questions. We analyzed responses to closed-ended questions by counting the responses for all sites and for overseas and domestic sites separately. For questions that asked respondents to provide a narrative answer, we compiled the open answers in one document that was analyzed and used as examples in the report.

We conducted this performance audit from March 2010 to July 2011 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

A selection of key iPost screens and reports for sites is described below. The site summary screen provides summary information on the site, including the site's grade; host summary statistics by category, which provide a graphical representation of the number of hosts that are compliant or not compliant; Active Directory (AD) account information for users and computers; and network activity at the site (see fig. 6). The site risk score summary screen provides a summary of the site's risk scores, including the site's grade, average risk score, and total risk score. A summary table shows the total risk score broken down by category. A graphical presentation of the risk score by component highlights components with high risk scores (see fig. 7). Detailed component screens provide breakdowns of the scoring results for each host. For example, the detailed security compliance screen (see fig. 8)
identifies each host, the type of host, the date of the last security compliance scan, and the total risk score assigned to the host. Users can select the details option to see more specific information on the security settings that failed compliance and the associated score that was assigned. The risk score advisory report provides a summary of all the scoring issues for the site and summary advice on how to improve the site's score. Summary information includes the site's grade, average risk score, and total risk score. A graphical presentation of the risk score by component highlights components with high risk scores (see fig. 9).

In addition to the individual named above, Ed Alexander and William Wadsworth (Assistant Directors), Carl Barden, Neil Doherty, Rebecca Eyler, Justin Fisher, Valerie Hopkins, Tammi Kalugdan, Linda Kochersberger, Karl Seifert, Michael Silver, Eugene Stevens, and Henry Sutanto made key contributions to this report.
The Department of State (State) has implemented a custom application called iPost and a risk scoring program that are intended to provide continuous monitoring of information security risk to elements of its information technology (IT) infrastructure. Continuous monitoring can facilitate nearer real-time risk management and represents a significant change in the way information security activities have been conducted in the past. GAO was asked to determine (1) the extent to which State has identified and prioritized risk to the department in its risk scoring program; (2) how agency officials use iPost information to implement security improvements; (3) the controls for ensuring the timeliness, accuracy, and completeness of iPost information; and (4) the benefits and challenges associated with implementing iPost. To do this, GAO analyzed program documentation and compared it to relevant standards, interviewed and surveyed department officials, and performed analyses on iPost data.

State has developed and implemented a risk scoring program that identifies and prioritizes several, but not all, areas affecting information security risk. Specifically, the scope of iPost's risk scoring program (1) addresses Windows hosts but not other IT assets on its major unclassified network; (2) covers a set of 10 scoring components that includes many, but not all, information system controls that are intended to reduce risk; and (3) assigns a score for each identified security weakness, although State could not demonstrate the extent to which the scores are based on risk factors such as threat, impact, or likelihood of occurrence that are specific to its computing environment. As a result, the iPost risk scoring program helps to identify, monitor, and prioritize the mitigation of vulnerabilities and weaknesses for the areas it covers, but it does not provide a complete view of the information security risks to the department.

State officials reported they used iPost (1) to identify, prioritize, and fix Windows vulnerabilities that were reported in iPost and (2) to implement other security improvements at their sites. For example, more than half of the 40 survey respondents said that assigning a numeric score to each vulnerability identified and each component was very or moderately helpful in their efforts to prioritize vulnerability mitigation.

State has implemented several controls aimed at ensuring the timeliness, accuracy, and completeness of iPost information. For example, State employed automated tools and collection schedules that support the frequent collection of monitoring data, which helps to ensure the timeliness of iPost data. State also relies on users to report when inaccurate and incomplete iPost data and scoring are identified, so they may be investigated and corrected as appropriate. Notwithstanding these controls, the timeliness, accuracy, and completeness of iPost data were not always assured. For example, there were several instances in which iPost data were not updated as frequently as scheduled, were inconsistent, or were incomplete. As a result, State may not have reasonable assurance that the data within iPost are accurate and complete enough to support risk management decisions.

iPost provides many benefits but also poses challenges for the department.
iPost has resulted in improvements to the department's information security by providing more extensive and timely information on vulnerabilities, while also creating an environment where officials are motivated to fix vulnerabilities based on department priorities. However, State has faced, and will continue to face, challenges with the implementation of iPost. These include (1) overcoming limitations and technical issues with data collection tools, (2) identifying and notifying individuals with responsibility for site-level security, (3) implementing configuration management for iPost, (4) adopting a strategy for continuous monitoring of controls, and (5) managing stakeholder expectations for continuous monitoring activities. GAO recommends the Secretary of State direct the Chief Information Officer to take a number of actions aimed at improving implementation of iPost. State agreed with two of GAO's recommendations, partially agreed with two, and disagreed with three. GAO continues to believe that its recommendations are valid and appropriate.
Since 2003, the United States has provided about $19.2 billion to develop the Iraqi security forces, first under the Iraq Relief and Reconstruction Fund (IRRF) and later through the Iraq Security Forces Fund (ISFF). DOD has apportioned about $2.8 billion in ISFF funds to purchase and transport equipment to Iraqi military and police forces. DOD does not report IRRF funds for Iraqi forces' equipment and transportation as a separate line item. DOD has requested an additional $2 billion to develop Iraqi security forces in the fiscal year 2008 Global War on Terror budget requests. The United States restructured the multinational force and increased resources to train and equip the Iraqi forces after they collapsed during an insurgent uprising in the spring of 2004. This collapse ensued after MNF-I transferred security responsibilities to the Iraqi forces before they were properly trained and equipped to battle insurgents. Iraqi security forces include the Iraqi Army, Navy, and Air Force under the Ministry of Defense and the Iraqi Police, National Police, and Border Enforcement under the Ministry of Interior. The train-and-equip program for Iraq operates under DOD authority and is implemented by MNF-I's major subordinate commands, including MNSTC-I and Multinational Corps-Iraq (MNC-I) (see fig. 1). This differs from traditional security assistance programs, which operate under State Department authority and are managed in country by DOD under the direction and supervision of the Chief of the U.S. Diplomatic Mission. MNSTC-I was established in June 2004 to assist in the development, organization, training, equipping, and sustainment of Iraqi security forces. MNC-I is responsible for the tactical command and control of MNF-I operations in Iraq. MNC-I's major subordinate commands were responsible for distributing equipment to some Iraqi security forces in 2003 and 2004.

As of July 2007, DOD and MNF-I had not specified which DOD equipment accountability procedures, if any, apply to the train-and-equip program for Iraq. Congress funded the train-and-equip program for Iraq under IRRF and ISFF but outside traditional security assistance programs, which, according to DOD officials, allowed DOD a large degree of flexibility in managing the program. DOD defines accountability as the obligation imposed by law, lawful order, or regulation, accepted by an organization or person, for keeping accurate records to ensure control of property, documents, or funds, with or without physical possession. DOD officials stated that, since the funding did not go through traditional security assistance programs, the DOD accountability requirements normally applicable to these programs—including the registration of small arms transferred to foreign governments—did not apply. Further, MNF-I does not currently have an order or orders comprehensively specifying accountability procedures for equipment distributed to Iraqi military forces under the Ministry of Defense, according to MNSTC-I officials. According to DOD officials, because Iraq train-and-equip program funding did not go through traditional security assistance programs, the equipment procured with these funds was not subject to the DOD accountability regulations that normally apply to these programs. For traditional security assistance programs, DOD regulations specify accountability procedures for storing, protecting, transporting, and registering small arms and other sensitive items transferred to foreign governments.
For example, the Security Assistance Management Manual, which provides guidance for traditional security assistance programs, states that the U.S. government’s responsibility for equipment intended for transfer to a foreign government under the Foreign Military Sales program does not cease until the recipient government’s official representative assumes final control over the items. Other regulations referenced by the Security Assistance Management Manual prescribe minimum standards and criteria for the physical security of sensitive conventional arms and require the registration of small arms transferred outside DOD control. During our review, DOD officials expressed differing opinions about whether DOD regulations applied to the train-and-equip program for Iraq. For example, we heard conflicting views on whether MNF-I must follow the DOD regulation that requires participants to provide small arms serial numbers to a DOD-maintained registry. Although DOD has not specified whether this regulation applies, MNSTC-I began to consolidate weapons’ serial numbers in an electronic format in July 2006 and provide them to the DOD-maintained registry, according to MNSTC-I officials. Moreover, MNF-I issued two orders in 2004 to its subordinate commands directing steps to account for all equipment distributed to Iraqi security forces, including military and police. Although these orders are no longer in effect and have not been replaced, they directed coalition forces responsible for issuing equipment to the Iraqi security forces to record the serial numbers of all sensitive items such as weapons and radios, enter relevant information onto a Department of the Army hand receipt, and obtain signatures from the Iraqi security official receiving the items, among other tasks. Army regulations state that hand receipts maintain accountability by documenting the unit or individual that is directly responsible for a specific item. According to a former MNSTC-I official, hand receipts are critical to maintaining property accountability. However, the orders did not require the consolidation of all records for equipment distributed by the coalition to the Iraqi security forces. According to officials in the MNSTC-I Office of the Staff Judge Advocate, although these orders were valid when they were issued in 2004, they are no longer in effect. In addition, these orders have not been replaced with a comprehensive order or orders that address the equipment distributed to Iraqi security forces, according to MNSTC-I officials. For forces under the Ministry of Interior, MNF-I issued two new orders in December 2005 to address the problem of limited records for equipment distributed to Ministry of Interior forces. Among other guidance, the orders established accountability procedures for equipment MNC-I and MNSTC-I distribute to Ministry of Interior forces, such as Iraqi police and national police. In addition, MNF-I issued other orders related to some types of equipment. However, according to MNSTC-I officials, MNF-I has not issued an order or orders that address the accountability of all equipment distributed by coalition forces to Iraqi military forces under the Ministry of Defense. Two factors led to DOD’s lack of full accountability for the equipment issued to Iraqi security forces (see fig. 2). First, until December 2005, MNSTC-I did not maintain a centralized record of all equipment distributed to Iraqi security forces. 
Second, MNSTC-I has not consistently collected supporting documents that confirm the dates the equipment was received, the quantities of equipment delivered, or the Iraqi units receiving the equipment. First, until December 2005, no centralized set of records for equipment distributed to Iraqi security forces existed. MNSTC-I did not consistently collect equipment distribution records as required in the property accountability orders for several reasons. The lack of a fully operational network to distribute the equipment, including national and regional-level distribution centers, hampered MNSTC-I's ability to collect and maintain appropriate equipment accountability records. According to former MNSTC-I officials, a fully operational distribution network was not established until mid-2005, over 1 year after MNF-I began distributing large quantities of equipment to the Iraqi security forces. In addition, staffing weaknesses hindered the development of property accountability procedures, according to former MNSTC-I and other officials. For example, according to the former MNSTC-I commander, several months passed after MNSTC-I's establishment before the command received the needed number of staff. As a result, MNSTC-I did not have the personnel necessary to record information on individual items distributed to Iraqi forces. Further, according to MNSTC-I officials, the need to rapidly equip Iraqi forces conducting operations in a combat environment limited MNSTC-I's ability to fully implement accountability procedures. Our analysis of MNSTC-I's property book system indicates that MNSTC-I does not have complete records confirming Iraqi forces' receipt of the equipment, particularly for Iraqi military forces. MNSTC-I established separate property books for equipment issued to Iraq's security ministries—the Ministry of Defense and Ministry of Interior—beginning in late 2005. At that time, MNSTC-I also attempted to recover past records. MNSTC-I officials acknowledge that the property books did not contain records for all of the equipment distributed and that existing records were incomplete or lacked supporting documentation. We identified discrepancies between data reported by the former MNSTC-I commander and MNSTC-I property book records (see fig. 3). Although the former MNSTC-I commander reported that about 185,000 AK-47 rifles, 170,000 pistols, 215,000 items of body armor, and 140,000 helmets were issued to Iraqi security forces as of September 2005, the MNSTC-I property books contain records for only about 75,000 AK-47 rifles, 90,000 pistols, 80,000 items of body armor, and 25,000 helmets. Thus, DOD and MNF-I cannot fully account for about 110,000 AK-47 rifles, 80,000 pistols, 135,000 items of body armor, and 115,000 helmets reported as issued to Iraqi forces as of September 22, 2005. In sum, our analysis of the MNSTC-I property book records found that DOD and MNF-I cannot fully account for at least 190,000 weapons reported as issued to Iraqi forces as of September 22, 2005. The second factor leading to the lapse in accountability is MNSTC-I's inability to consistently collect supporting documents that confirm when the equipment was received, the quantities of equipment delivered, and the Iraqi units receiving the equipment. We requested and received a sample of documents confirming equipment received by Iraqi units during specific weeks in February, April, July, and November 2006. Due to the limited number of these records, we cannot generalize the information across all of MNSTC-I's records.
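The discrepancy figures above follow from a straightforward comparison of the two sources. A minimal recomputation using the rounded totals cited in this report:

```python
# Rounded totals as of September 22, 2005: the former MNSTC-I commander's
# reported issuances versus the MNSTC-I property book records.
reported = {"AK-47 rifles": 185_000, "pistols": 170_000,
            "body armor": 215_000, "helmets": 140_000}
recorded = {"AK-47 rifles": 75_000, "pistols": 90_000,
            "body armor": 80_000, "helmets": 25_000}

for item, issued in reported.items():
    print(f"{item}: about {issued - recorded[item]:,} not fully accounted for")

# Weapons alone (rifles plus pistols): 110,000 + 80,000 = at least 190,000.
weapons_gap = sum(reported[i] - recorded[i] for i in ("AK-47 rifles", "pistols"))
print(f"Weapons total: at least {weapons_gap:,}")
```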
Our preliminary review of this sample found that in the period prior to June 2006, MNSTC-I provided only a few supporting documents confirming that Iraqi units had received the equipment. For the period after June 2006, we found that MNSTC-I possessed more supporting documents. According to MNSTC-I officials who rotated in country in June 2006, the command began to place greater emphasis on collecting documentation of Iraqi receipt of equipment. However, MNSTC-I officials also stated that security constraints make it difficult for them to travel within Iraq and collect hard copies of all documentation. They depend instead on warehouse staff to send the receipts via scanner, fax, or computer. Furthermore, the property books consist of extensive electronic spreadsheets—the January 2007 property book records for the Ministry of Defense contained 227 columns and 5,342 rows. Staff identify erroneous entries through periodic manual checks and report errors to the property book officer, according to MNSTC-I officials. Although MNSTC-I issued a draft Standard Operating Procedures handbook to help assigned personnel input data accurately and produce relevant reports, these procedures require multiple steps and could lead to the unintentional inclusion of incorrect data in calculations and reports. MNSTC-I officials acknowledged that they have identified numerous mistakes due to incorrect manual entries, which required them to find the original documentation to reverify the data and correct the entries. MNSTC-I officials have also acknowledged that the spreadsheet system is an inefficient management tool given the large size of the program, the large amount of data, and the limited number of personnel available to maintain the system. MNSTC-I plans to move the property book records from a spreadsheet system to a database management system by summer 2007. Complete and accurate records are an essential component of a property accountability system for equipment issued to Iraqi security forces. However, DOD and MNF-I cannot ensure that Iraqi security forces received the equipment as intended. DOD's and MNF-I's lack of clear and consistent guidance contributed to partial data collection in the field. Further, insufficient staffing, the lack of a fully developed network to distribute the equipment, and inadequate technology have hampered record keeping and data collection. Given DOD's request for an additional $2 billion to develop Iraqi security forces, improving accountability procedures can help ensure that the equipment purchased with these funds reaches the intended recipients. In addition, adequate accountability procedures can help MNF-I identify Iraqi forces' legitimate equipment needs, thereby supporting the effective development of these forces. To help ensure that U.S.-funded equipment reaches the Iraqi security forces as intended, we recommend that the Secretary of Defense take the following two actions: (1) determine which DOD accountability procedures apply or should apply to the program and (2) after defining the required accountability procedures, ensure that sufficient staff, functioning distribution networks, standard operating procedures, and proper technology are available to meet the new requirements. We provided a draft of this report to the Secretary of Defense for review and comment. We received written comments from DOD, which are reprinted in appendix II.
DOD concurred with both of our recommendations and indicated that it is currently reviewing policies and procedures for equipment accountability to ensure that proper accountability is in place for the Iraq train-and-equip program. DOD also indicated that it is important to ensure that proper staffing, financial management, property distribution, and information management and communications systems are in working order. We are sending copies of this report to interested congressional committees. We will also make copies available to others on request. In addition, this report is available at no charge on GAO's Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8979 or christoffj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix IV. This report (1) examines the property accountability procedures that the Department of Defense (DOD) and Multinational Force-Iraq (MNF-I) applied to the U.S. train-and-equip program for Iraq and (2) assesses whether DOD and MNF-I can account for the U.S.-funded equipment issued to Iraqi security forces. Our work focused on the accountability requirements for the transportation and distribution of U.S.-funded equipment and did not review any requirements relevant to the procurement of this equipment. We performed our work from March 2006 through July 2007 in accordance with generally accepted government auditing standards. To examine the laws and regulations that govern property accountability, we reviewed the relevant legislation that has appropriated funds to train and equip Iraqi security forces, pertinent DOD regulations, and applicable U.S. military orders. We interviewed officials from the Department of State and DOD, including the Office of the Deputy Under Secretary of Defense for Logistics and Materiel Readiness; the Defense Security Cooperation Agency; the Defense Logistics Agency; the Tank-automotive and Armaments Command; and the Defense Reconstruction and Support Office. We also interviewed current and former officials from MNF-I, including Multinational Security Transition Command-Iraq (MNSTC-I) and Multinational Corps-Iraq (MNC-I). We reviewed MNF-I's accountability procedures for the U.S.-funded equipment it has issued to the Iraqi security forces, and we reviewed documentation from and interviewed current and former officials with the U.S. Central Command, MNF-I, MNSTC-I, and MNC-I in Baghdad, Iraq; Tampa, Florida; Washington, D.C.; and Fort Leavenworth, Kansas. To provide our analysis of the amount of equipment reported by MNF-I as issued to the Iraqi security forces, we interviewed key officials to gain an understanding of the MNSTC-I property book data and the information reported by the former MNSTC-I commander. To assess the reliability of the former MNSTC-I commander's data, we compared the data to classified information and interviewed former MNSTC-I officials about their procedures for collecting the data. Although we could not fully determine the reliability and accuracy of these data, we determined that they were sufficiently reliable to make broad comparisons against the MNSTC-I property books and to assess major discrepancies between the two reports.
In assessing the documents supporting the January 2007 MNSTC-I property books, we were limited by MNSTC-I's inability to scan large amounts of these supporting paper documents and provide them to us electronically. To develop a judgmental sample, we requested supporting documents for 1 week in each of four months—February, April, July, and November of 2006—one month in every quarter. In addition to the contact named above, Judy A. McCloskey (Assistant Director), Nanette J. Barton, Lynn Cothern, Martin De Alteriis, Mattias Fenton, Mary Moutsos, and Jason Pogacnik made significant contributions to this report. David Bruno, Monica Brym, and Brent Helt also provided assistance.
Since 2003, the United States has provided about $19.2 billion to develop Iraqi security forces. The Department of Defense (DOD) recently requested an additional $2 billion to continue this effort. Components of the Multinational Force-Iraq (MNF-I), including the Multinational Security Transition Command-Iraq (MNSTC-I), are responsible for implementing the U.S. program to train and equip Iraqi forces. This report (1) examines the property accountability procedures DOD and MNF-I applied to the U.S. train-and-equip program for Iraq and (2) assesses whether DOD and MNF-I can account for the U.S.-funded equipment issued to the Iraqi security forces. To accomplish these objectives, GAO reviewed MNSTC-I property books as of January 2007 and interviewed current and former officials from DOD and MNF-I. As of July 2007, DOD and MNF-I had not specified which DOD accountability procedures, if any, apply to the train-and-equip program for Iraq. Congress funded the train-and-equip program for Iraq outside traditional security assistance programs, providing DOD a large degree of flexibility in managing the program, according to DOD officials. These officials stated that since the funding did not go through traditional security assistance programs, the DOD accountability requirements normally applicable to these programs did not apply. Further, MNF-I does not currently have orders that comprehensively specify accountability procedures for equipment distributed to the Iraqi forces. DOD and MNF-I cannot fully account for Iraqi forces' receipt of U.S.-funded equipment. Two factors led to this lapse in accountability. First, MNSTC-I did not maintain a centralized record of all equipment distributed to Iraqi forces before December 2005. At that time, MNSTC-I established a property book system to track issuance of equipment to the Iraqi forces and attempted to recover past records. GAO found a discrepancy of at least 190,000 weapons between data reported by the former MNSTC-I commander and the property books. Former MNSTC-I officials stated that this lapse was due to insufficient staff and the lack of a fully operational distribution network, among other reasons. Second, since the beginning of the program, MNSTC-I has not consistently collected supporting records confirming the dates the equipment was received, the quantities of equipment delivered, or the Iraqi units receiving the items. Since June 2006, the command has placed greater emphasis on collecting the supporting documents. However, GAO's review of the January 2007 property books found continuing problems with missing and incomplete records. Further, the property books consist of extensive electronic spreadsheets, which are an inefficient management tool given the large amount of data and limited personnel to maintain the system.
The Livestock Mandatory Reporting Act of 1999 amended the Agricultural Marketing Act of 1946. The act established a livestock marketing information program to (1) provide producers, packers, and other industry participants with market information that can be readily understood; (2) improve USDA price and supply reporting services; and (3) encourage more competition in these markets. Under the act, packers were required to report livestock market information that had previously been voluntarily reported and new information not previously reported to the public—such as information about contract livestock purchases. Under the voluntary program, USDA employees, referred to as reporters, gathered information daily by talking directly with producers, packers, feedlot operators, retailers, and other industry participants; by attending public livestock auctions and visiting feedlots and packing plants; and by taking other actions. Under the Livestock Mandatory Reporting Act, packers were instead required to report on their cattle and hog purchases and their sales of beef. The act also authorized USDA to require that packers report on lambs. USDA implemented the Livestock Mandatory Reporting Act by establishing a livestock mandatory reporting program to collect packers' marketing information and disseminate it to the public through daily, weekly, monthly, and annual reports. Packers were required to electronically report hog purchases three times each day, cattle purchases twice each day, lamb purchases once daily, domestic and export sales of beef cuts twice daily, and sales of lamb carcasses and lamb cuts once daily. As of June 2005, 116 packers and importers were required to provide information under the Livestock Mandatory Reporting Act. Two branches of USDA's Agricultural Marketing Service (AMS) administered the livestock mandatory reporting program—the Market News Branch (Market News) and the Audit, Review, and Compliance Branch (ARC). Market News was responsible for collecting and generating market news reports from information supplied by packers. Market News reporters gathered and reviewed these data, contacted packers to resolve any questions they had, and prepared reports. Reporters were required to ensure that they did not breach the confidentiality of packers by providing information that would allow the public to identify an individual packer. In addition to preparing reports, Market News personnel interacted with any packers that AMS believed needed to make changes in reporting to comply with the Livestock Mandatory Reporting Act. To identify compliance problems, ARC personnel audited the transaction data of packing plants three times a year. When ARC found packers that were reporting incorrectly, ARC notified the Market News reporters, who were responsible for notifying and following up with packers until the packers reported correctly. The Secretary of Agriculture was authorized to assess a civil penalty of up to $10,000 a day per violation on a packer that violated the act. AMS designed its livestock mandatory market news reporting program with elements intended to ensure the quality of its news reports. USDA officials, for example, developed a Web-based reporting system with automated and manual screening of packer transaction data and established an audit surveillance program to ensure packers reported accurately.
However, we found that while AMS had made progress, its livestock market news program fell short of ensuring reliability because AMS reporting was not fully transparent and AMS audits of packers revealed some problems with the quality of packers' transaction data. AMS developed a mandatory livestock market news reporting program incorporating a number of features to ensure quality. More specifically, AMS took the following steps to ensure the quality of its livestock mandatory market news reports: AMS hired two contractors to assist in developing a rapid and reliable reporting system: Computer & Hi-Tech Management, Inc., was hired to assess the capability of the packing companies to provide electronic data, and PEC Solutions developed the computer software processes upon which the mandatory livestock reporting system is now based. AMS and PEC Solutions developed a software system that allows packers to provide their transaction data on Web-based forms or to upload completed files into the reporting system database. PEC Solutions prepared an industry guide to give packers instructions for correctly submitting transaction data. PEC Solutions used programmers who did not participate in developing the system to test its functioning. AMS further tested the system using simulated production data, because packers had not started reporting actual data. As a further validation step, AMS staff manually calculated data for several reports and compared those data with data generated by the system. AMS established computer-based data security controls and computerized screening of packer transaction data to ensure the data are correctly reported. AMS established an audit function to periodically test the accuracy of transaction data that packers submit to AMS by visiting packer facilities, checking documentation in support of reported transactions, and testing the completeness of packers' reports. In addition, in May 2001, the Secretary of Agriculture appointed a top-level USDA team—the Livestock Mandatory Price Reporting Review Team—to review problems in its calculations of certain boxed-beef prices. In addition to reviewing that problem and making related recommendations, most of which AMS adopted, the team assessed the overall integrity and accuracy of the program. This team found that, for the most part, AMS had succeeded in gathering and reporting accurate data in a timely fashion. The team's major criticism was that AMS had not adequately tested its system to ensure it was accurately calculating data that packers had reported. Subsequently, AMS initiated further testing to ensure the accuracy of its reports. The team also found that AMS's plan for audit surveillance of packers was behind schedule due to difficulties in hiring qualified auditors. At that time AMS had conducted audits at only 19 of the 119 packer facilities it planned to reach. Since then, AMS has overcome these problems and conducted over 1,100 audits at packers' facilities. The Livestock Mandatory Reporting Act was intended to provide producers with readily understandable market information on a significant portion of livestock industry transactions. The quality of this information is especially important because livestock transactions negotiated each day may be influenced by AMS reported prices, and some contracts between packers and producers rely on the weighted average prices that AMS reports.
AMS was authorized to make reasonable adjustments in information reported by packers to reflect price aberrations or other unusual or unique occurrences that the Secretary determined would distort the published information to the detriment of producers, packers, or other market participants. In addition, AMS is expected to adhere to Office of Management and Budget and USDA guidelines for disseminating influential statistical and financial information, which call for a high degree of transparency about data sources and methods while maintaining the confidentiality of the underlying information. AMS itself has recognized the usefulness of providing the public with information about the preparation of its market reports. We found that AMS reporters adjusted the transaction data that packers reported in an effort to reflect market conditions, but this practice has not been made transparent. We observed that AMS reporters sometimes eliminated small numbers of apparently erroneous transactions, as would be expected. Significantly, however, we found that AMS reporters eliminated numerous low- and some high-priced transactions that they believed did not reflect market conditions, particularly when reporting on cattle. Our analysis shows that from April through June 2005, when livestock prices were declining somewhat, AMS reporters excluded about 9 percent of the cattle transactions that packers had reported to AMS, about 3 percent of the reported beef transactions, and 0.2 percent of the reported hog transactions. Excluding small percentages of livestock or meat transactions may have had a small effect on the range of prices that AMS reported and a negligible effect on weighted average prices. However, as the percentage of excluded transactions increased, so did the possibility that AMS's weighted average prices would differ from what it would otherwise have reported. Table 1 provides more details about the transactions excluded during this period. In addition, our analysis shows that from May through October 2003, when cattle prices were rising and fluctuating more widely, AMS reporters excluded about 23 percent of cattle transactions packers reported to AMS. Concerning hogs, during a period of rising prices between October 2003 and March 2004, we found that 0.1 percent of hog transactions were excluded from AMS reports. Because AMS reports excluded significantly more cattle transactions, we performed further analyses on them. Tables 2 and 3 show (1) information about the cattle transactions that AMS excluded from certain livestock mandatory market news reports from May through October 2003 and (2) examples of 12 days from this period showing the effects of the excluded transactions on the reported price ranges and weighted average prices. During the period, AMS reporters' decisions to exclude transactions had some effect on the cattle data we analyzed in AMS reports on about one-third of the days and almost no effect on the others. Further details of our analyses are discussed in appendix I and shown in appendix II. AMS guidance for its reporters on eliminating transactions is limited, lacking clarity and precision. These instructions advise AMS reporters to review the transactions that packers have reported each day and to eliminate certain low- and high-priced transactions. AMS's varying instructions for reporters are described in table 4.
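To make the mechanics concrete, the sketch below shows how excluding one low-priced lot raises a weighted average price and narrows the reported price range. The transactions are invented for illustration; they are not AMS data, and actual reporter decisions involve more judgment than a simple flag.

```python
# Invented lots: (price per hundredweight, head count, excluded by reporter?).
transactions = [
    (84.50, 300, False),
    (85.00, 150, False),
    (71.00,  20, True),   # low-priced small lot of the kind reporters excluded
    (86.25, 200, False),
]

def weighted_avg(lots):
    # Weighted average price = sum(price * head) / sum(head).
    return sum(p * h for p, h, _ in lots) / sum(h for _, h, _ in lots)

kept = [t for t in transactions if not t[2]]
print(f"Weighted average, all lots:        {weighted_avg(transactions):.2f}")  # 84.73
print(f"Weighted average, after exclusion: {weighted_avg(kept):.2f}")          # 85.15
print(f"Range, all lots:        {min(p for p, _, _ in transactions):.2f} to "
      f"{max(p for p, _, _ in transactions):.2f}")                             # 71.00 to 86.25
print(f"Range, after exclusion: {min(p for p, _, _ in kept):.2f} to "
      f"{max(p for p, _, _ in kept):.2f}")                                     # 84.50 to 86.25
```

As in our findings, dropping the low-priced lot narrows the range considerably while nudging the weighted average upward.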
Senior AMS supervisors review reporters' decisions to eliminate transactions, and AMS headquarters officials monitor the number of transactions that reporters exclude and the reasons why. AMS officials explained that, in general, their reviews and adjustments are intended to exclude transactions that are outside the prevailing market price ranges and to avoid reporting ranges of prices that appear overly broad. Furthermore, Market News officials explained that this process is conducted because they believe that livestock market reports are intended to convey overall market conditions rather than precise statistics. Also, an AMS official noted that AMS Market News reporters mostly exclude low-price transactions involving small quantities, because those transactions often involve lower quality animals or products. Concerning hogs, AMS's reporters of hog transactions said that they were verbally instructed by headquarters officials to exclude few hog transactions soon after the start of the program. AMS headquarters officials said that these verbal instructions were provided after one or more large packers complained that it appeared AMS was excluding transactions because of price alone. Although AMS reporters' decisions to exclude transactions modified the prices they reported, AMS has not clearly explained this practice to readers of AMS livestock market news. AMS's Web site does not address the subject, and AMS livestock mandatory market news reports include no qualifying notes. Some agricultural economists who study the livestock market and other industry experts we interviewed said that they were not aware of the extent of adjustments that AMS made. An AMS official explained that AMS has not previously provided public information on this process because it would be difficult to capture the nuances of AMS's report preparation in a public document. Nevertheless, AMS previously acknowledged that it may be useful to provide information to the public about the types of adjustments that it makes to its livestock mandatory market news reports. AMS officials also recognized that it would be desirable for AMS to improve its instructions for reporters and disclose more about its reporting practices to livestock market news report readers. Our review of AMS's database indicates that further analyses could provide AMS with more information about the reasons why reporters eliminate transactions, the consistency of reporting, and the extent of changes in AMS's presentation of prices. AMS's Livestock and Seed Program Deputy Administrator said that, as a result of the information we brought to his attention, he had started to improve the reporters' instructions. Since AMS reports help provide the industry with signals about when, where, and at what price to buy and sell livestock and meats, some industry participants might have made somewhat different decisions on certain days had they better understood AMS report content. In addition, the lack of transparency over the content and preparation of the livestock mandatory market reports may have also limited the confidence that some readers place in AMS reports. ARC regularly audited packers to provide assurance that the packers reported all of their transactions accurately and in compliance with AMS's regulations. The quality of AMS reports depends on packers submitting correct transaction information.
Once every 4 months, ARC auditors visited each of the 116 packers' plants, or associated company headquarters, to review livestock transaction data. These audits usually included (1) a test of the completeness of the packer's reports and (2) a detailed review of a sample of transactions to determine that each transaction in the sample was reported accurately and was supported by appropriate documentation. ARC has conducted over 1,100 audits at packers' facilities since 2001. Detailed information was available for 844 of these audits conducted over the 36 months ending in April 2005. Table 5 contains additional information about the content of ARC audits. Of the 844 AMS audits for which data were available, 540, or 64 percent, identified one or more instances when it appeared that packers did not meet AMS reporting standards. The other 304 audits, or about 36 percent, did not identify any such instances. AMS audits detected a wide variety of packer reporting inaccuracies, such as the omission of livestock slaughtered, underreporting of purchases, delayed reporting of livestock purchases and meat sales, price inaccuracies, and the misclassification of transactions. AMS officials acknowledged that packers' reporting errors were of concern but said that the audit results should be considered in the context of the volume of transactions that AMS reports—compared to the hundreds of thousands of pieces of transaction data that packers reported daily, the errors identified by AMS audits were relatively few. However, our review shows that AMS findings are based on audits of a small portion of packers' transactions, and it is likely that there have also been errors in packers' unaudited transactions. Furthermore, a closer look at 86 AMS audits completed from June through September 2004 shows that AMS identified 46 instances when 22 packers submitted incorrect transaction data that AMS classified as possibly affecting the accuracy of AMS reports. Table 6 provides examples of AMS audit findings. AMS officials said that many ARC audit findings were minor and usually had little effect, if any, on the accuracy of AMS reports. They also said that, since 2001, packers had clearly improved their reporting of transactions. AMS officials said that because of the overall improvement in packers' reporting of transactions, they reduced the frequency of audits at each packer from four to three times a year. Our review provides some support for AMS officials' view that packers were reporting better than at the outset of the program. From May 2002 through April 2005, the number of AMS audits with findings as a percentage of total audits decreased each year, from 76 percent in 2002 to 55 percent in 2005. In addition, the average number of audit findings per audit decreased from 1.8 to 1.4 over that period. Moreover, in the first quarter of 2005, AMS audits did not identify any problems that rose to its highest level of concern. Nevertheless, AMS classified 22 percent of the problems it identified in the first quarter of 2005 as possibly having some adverse effect on the accuracy of its reports. In addition, follow-up on problems that ARC auditors identified was sometimes lengthy. Our analysis of AMS's follow-up efforts on the 86 audits it conducted from June through September 2004 showed that, on average, about 85 days elapsed between the date of an AMS audit and the date AMS recorded that the packer had made the needed corrections.
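The 85-day figure is an average of elapsed days between each audit and its recorded correction, of the kind sketched below; the dates here are invented for illustration.

```python
from datetime import date

# Invented (audit date, recorded correction date) pairs.
audits = [
    (date(2004, 6, 10), date(2004, 9, 1)),
    (date(2004, 7, 22), date(2004, 10, 18)),
    (date(2004, 8, 5), date(2004, 11, 2)),
]

lags = [(corrected - audited).days for audited, corrected in audits]
print(f"Average days from audit to recorded correction: {sum(lags) / len(lags):.0f}")
```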
AMS reporters frequently contacted packers to convey information about the correct way for packers to report. Their outreach was prompted both by audit findings and by reporters' reviews of the packers' data. When recurring reporting problems arose, headquarters officials issued internal guidance to clarify proper reporting procedures for both auditors and reporters. On at least two occasions, AMS reporters provided information from this internal guidance to packers to clarify proper reporting procedures. However, some packers, including three of the largest packers, did not promptly correct reporting problems that AMS identified. Since 2002, AMS has sent 11 packers 21 letters calling the packers' attention to apparent delays in correcting reporting issues and warning them that penalties might be applied should the delays continue. Of these, AMS sent 8 letters to 6 packers between January 2004 and September 2005, with 6 letters involving cattle and 2 involving hogs. In addition, AMS twice levied $10,000 fines on packers, although it suspended these fines provided the packers went a year without additional violations of the Livestock Mandatory Reporting Act. As of September 2005, AMS had continuing issues with 2 of the 11 packers that received AMS warning letters. Appendix III contains additional information on the issues leading to AMS warning letters to packers. While AMS audit reports identified many problems in packers' reporting of transactions, there are two reasons why the reports do not provide a clear basis for assessing the overall accuracy of the packers' data that underlie AMS livestock mandatory market news reports. First, AMS did not select transactions for audit in a manner that would enable AMS to project the overall accuracy of packers' transaction data. Second, AMS did not develop analyses that demonstrate the overall accuracy of information in its reports. We explored two approaches with AMS officials to (1) obtain better indications of the overall accuracy of packers' transaction data and (2) better direct future AMS audits. First, because AMS audits did not provide a basis for projecting the overall accuracy of packers' transaction data, AMS could periodically audit a statistical sample of transactions, which might provide such a basis (a simplified sketch of this kind of projection appears below). Second, AMS could analyze its audit results, focusing on findings of consequence and its follow-up efforts to address those findings. Such analyses could be useful for identifying the relative frequency of concerns with packers' transaction data, the types of recurring errors, the timeliness and consistency of auditor and Market News follow-up on packers' actions to address reporting issues, and the overall effectiveness of AMS efforts to quickly resolve reporting issues. AMS officials indicated that these suggestions appeared to be reasonable and that they would consider taking both steps. AMS data show that from April through June 2005, 4 percent, 5 percent, and 7 percent of selected cattle, beef, and hog data, respectively, were received from packers by AMS after the deadlines set by the Livestock Mandatory Reporting Act. Nevertheless, AMS officials said that while some packers missed the reporting deadlines, most usually submitted their transaction data within minutes thereafter—giving AMS reporters enough time to include almost all transaction data in market news reports.
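A minimal sketch of the statistical-sampling projection mentioned above, assuming a simple random sample of transactions: the sample error rate is projected to all transactions with a stated margin of error. The sample size and error count are invented.

```python
import math

sample_size = 400    # hypothetical randomly selected transactions audited
errors_found = 12    # hypothetical transactions failing the audit tests

p_hat = errors_found / sample_size
# 95 percent confidence interval using the normal approximation.
margin = 1.96 * math.sqrt(p_hat * (1 - p_hat) / sample_size)
print(f"Projected error rate: {p_hat:.1%} +/- {margin:.1%}")  # about 3.0% +/- 1.7%
```

Unlike a judgmental selection, a random sample of this kind supports a statement about the error rate in all of a packer's transactions, not just the audited ones.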
In addition, AMS officials said that if some reporting deadlines and publication times set in the Livestock Mandatory Reporting Act were changed, this would help packers working on the west coast meet the reporting schedule and help AMS respond to changing market conditions. Coordination between AMS and the Grain Inspection, Packers and Stockyards Administration (GIPSA) has been limited, primarily due to the legal authority within which each operates. AMS implemented and enforced the Livestock Mandatory Reporting Act. While the Livestock Mandatory Reporting Act called for the establishment of a mandatory reporting program, it required that information be made available to the public in a manner that ensured the confidentiality of the identity of persons and proprietary business information. Such information could not be disclosed except (1) to USDA agents or employees in the course of their duties under the Livestock Mandatory Reporting Act, (2) as directed by the Secretary or the Attorney General for enforcement purposes, or (3) by a court. AMS officials said that they have shared packer transaction data with GIPSA when requested for specific investigations. GIPSA implements and enforces the Packers and Stockyards Act. GIPSA monitors livestock markets and investigates when it has reason to believe there have been violations of the act. Since 1999, when the Livestock Mandatory Reporting Act was adopted, there have been two cases in which GIPSA formally requested access to a packer's transaction data from AMS for specific investigations. AMS provided access as GIPSA requested. One investigation involved hogs, and the other, lamb. In one case, opened in October 2002, GIPSA investigated whether a packer was manipulating reported prices in AMS's livestock mandatory reporting program to reduce its procurement costs. GIPSA did not identify a violation of the Packers and Stockyards Act and closed this investigation in 2005. However, GIPSA identified instances in which the packer's reports of negotiated livestock purchases met the documentation standards of the Packers and Stockyards Act but may not have met the standards of the Livestock Mandatory Reporting Act. In September 2005, GIPSA officials briefed AMS officials on their investigation and suggested that AMS consider whether the packer was complying with the Livestock Mandatory Reporting Act. In response to our further questions about this case, officials of AMS and GIPSA said that they would consider additional inquiry or investigation under both statutes to determine if there have been repeated transactions reported to AMS for which the packer lacks certain documentation. In the second case, GIPSA investigated the possibility that a packer paid less for livestock as a result of providing undue preference to a select group of producers. GIPSA initiated this case in May 2002 and closed it in September 2005. GIPSA officials said that individual packer transaction data held by AMS would be useful for monitoring competitive behavior in livestock markets. However, because GIPSA could not obtain that confidential information unless the Attorney General or the Secretary directed disclosure of the information for enforcement purposes, GIPSA is making do with the publicly available AMS livestock market report data. This monitoring effort is limited because AMS reports do not include the company-specific transaction data that might reveal anti-competitive behavior.
More specifically, GIPSA uses publicly available AMS report data in cattle and hog price monitoring programs to forecast market prices for comparison with actual prices. If there are notable differences, GIPSA officials attempt to assess whether economic conditions could be responsible. Should GIPSA find that a difference was not readily explained by economic conditions, GIPSA would investigate further to determine whether anti-competitive behavior by individual firms was involved. At such a point, GIPSA may request that AMS provide company-specific livestock transaction data for GIPSA's investigation. GIPSA officials said that while this monitoring effort is less informative than one that would rely on company-specific transaction data, their monitoring programs are relatively new and they have not identified better alternatives at this point. AMS has not achieved the level of transparency needed for establishing the reliability of its livestock market news reports—a level that would more fully disclose to market participants and observers its practices in reviewing packers' transactions and the effects on AMS reports. Without further disclosure of its reporting practices, market participants are less informed than they should be about (1) AMS reporters' reviews, (2) AMS decisions on presenting prevailing prices, and (3) the results of AMS audits of packers' transactions. Also, the lack of precision and clarity in AMS's varying instructions for its reporters has led to inconsistent reporting approaches, which could adversely affect readers' confidence in AMS reports. AMS market news readers should have information that enables them to understand AMS's approach to reporting prices and to have confidence that the approaches are based on sound statistical, economic, and reporting guidance. In addition, the problems that AMS audits identified in packers' transaction information warrant continued vigilance if the mandatory reporting program is renewed. Unless AMS takes some additional steps, it will not have information to (1) assess the overall accuracy of packers' transaction data, (2) focus its audit efforts on recurring significant problems, and (3) ensure that prompt and consistent action on audit findings is being taken. Concerning the GIPSA investigation in which GIPSA raised questions about a packer's documentation of its transactions, unless AMS and GIPSA complete further investigative work, neither agency can have assurance of the accuracy and propriety of the packer's transactions. Should Congress extend the Livestock Mandatory Reporting Act, we recommend that the Secretary of Agriculture direct the Administrator, Agricultural Marketing Service, to: Increase transparency by (1) reporting to market news readers on its reporters' instructions for making reporting decisions that reflect prevailing market conditions, (2) periodically reporting on the effects of reporters' decisions on AMS reported prices, and (3) reporting the results of its audit efforts. Clarify AMS reporters' instructions to make them more specific and consistent by (1) consulting with packers, producers, agricultural economists, and other interested stakeholders and (2) undertaking revisions that consider economic analyses of past reporting trends, livestock and meat market variations, and federal statistical and information reporting guidance. Develop information about the overall accuracy of packers' transaction data by auditing a statistical sample of packers' transactions.
Further develop AMS audit strategies to identify recurring significant problems. Address the timeliness and consistency of AMS reporters' efforts to follow up on audit findings. We also recommend that the Secretary of Agriculture direct the Administrators of the Agricultural Marketing Service and the Grain Inspection, Packers and Stockyards Administration to further investigate the reporting practices of one packer's low-price purchases of livestock. We provided USDA with a draft of this report for review and comment. In a memorandum dated November 18, 2005, we received formal comments from USDA's Acting Under Secretary for Marketing and Regulatory Programs. These comments are reprinted in appendix IV. We also received oral technical comments from AMS and GIPSA officials, which we incorporated into the report as appropriate. USDA generally agreed with our findings and recommendations and discussed the actions it has taken, is taking, or plans to take to address our recommendations. Among other things, USDA stated that AMS would (1) prepare publicly available reports on the volume of transactions excluded by reporters and their effect on reported prices, and take steps to increase public awareness of reporting methods and processes; (2) clarify AMS reporters' instructions while following federal and departmental statistical and information reporting guidance; (3) post quarterly audit information to its Web site and identify additional audit information to add in the future; (4) develop auditing methods to allow conclusions to be drawn about overall data accuracy; (5) review its auditing methods to increase the overall effectiveness of the compliance program; and (6) conduct further inquiry into the issues raised during one of GIPSA's investigations. Concerning the transactions that AMS excluded from its market news reports, USDA agreed that 22.8 percent of cattle transactions were excluded from May to October 2003. USDA added that AMS reporters excluded some transactions during that period because its computer system could not differentiate between the base and net prices for certain cattle sales. Our review indicates that AMS exclusions for that reason were only part of the explanation. More specifically, AMS reporters' log entries showed that of the transactions AMS excluded from May to October 2003, about 24 percent were excluded for reasons relating to base prices, while about 34 percent were excluded to narrow the range of prices that AMS reported, and the remainder were excluded for a variety of other reasons such as small head count, small lots, low weight, mixed lots, or grade of cattle. In addition, AMS suggested that its programming change to differentiate base and net prices led to fewer exclusions (8.8 percent) during the April through June 2005 period. While we agree that this is part of the explanation, we believe that, if the livestock mandatory reporting program is renewed, AMS needs to focus on the bases and methods for excluding transactions, especially the extent to which AMS will exclude transactions when prices are again changing rapidly, as they did in 2003. AMS also stated that care should be exercised when drawing conclusions about packer compliance because packers' errors are relatively few compared to the 500,000 data elements packers may have submitted on some days. We believe insufficient information is available to assess the overall quality of packers' data.
AMS audits focused on only a small portion of the data submitted by packers, and it is likely that packers' unaudited transactions contain errors as well. We continue to believe that the packer reporting problems that AMS identified warrant continued vigilance should the program be renewed, and we recommend that AMS develop auditing methods to allow conclusions to be drawn about the overall accuracy of packers' data. As agreed with your staffs, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to interested congressional committees; the Secretary of Agriculture; the Under Secretary for Marketing and Regulatory Programs; the Administrators of the Agricultural Marketing Service and the Grain Inspection, Packers and Stockyards Administration; and other interested parties. We will also make copies available to others on request. In addition, the report will be available at no charge at GAO's Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3841 or robinsonr@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix V. Our objectives were to review the extent to which (1) the U.S. Department of Agriculture's (USDA) Agricultural Marketing Service (AMS) takes sufficient steps to ensure the quality of its livestock mandatory market news reports and (2) AMS and the Grain Inspection, Packers and Stockyards Administration (GIPSA) coordinate efforts to encourage competition in livestock markets. To review AMS's steps to ensure the quality of its reports, we visited the two Market News Branch (Market News) field offices in Des Moines, IA, and St. Joseph, MO, where we spoke with AMS reporters about their responsibilities related to mandatory price reporting and observed them as they prepared livestock mandatory reports for cattle, beef, hogs, lamb, and lamb meat. To test AMS's computerized reporting system, we obtained and analyzed unpublished data from AMS's livestock mandatory reporting database for beef, cattle, and swine. For this analysis, we used data reported by packers through the Live Cattle Daily Report (Current Established Prices) (LS-113), Swine Daily Report (LS-119), and Boxed Beef Daily Report (LS-126) contained in AMS's livestock mandatory reporting database. We reviewed USDA documents on the report preparation and data storage system and analyzed the flow of data into and through the system. We performed electronic testing and validation of system data developed for us from data available in the AMS system. We found the data were sufficiently reliable to support our analyses. We also replicated elements of certain reports—the Five Area Daily Weighted Average Direct Slaughter Cattle Report and the National Daily Direct Morning Hog Report—that livestock experts told us were important to livestock producers. In addition, we examined transactions reporters excluded from AMS reports. First, we examined transactions made between April and June 2005. More specifically, we reviewed data packers submitted on the Live Cattle Daily Report (Current Established Prices) (LS-113), Swine Daily Report (LS-119), and Boxed Beef Daily Report (LS-126) and compared them with the reports published during this period.
Second, we examined transactions AMS excluded from its reports during periods of rapidly rising cattle and hog prices—for cattle, transactions excluded by reporters for a key category of live and dressed cattle prices from May through October 2003; for hogs, those excluded from October 2003 to March 2004. To determine which transactions were eliminated for market reasons, we reviewed the reporter log field in the database. The logs identify transactions eliminated for various reasons, such as price, low price, high price, or lot size. We analyzed data from all days reported for this time period in the 35 to 65 percent choice steer grade of the Five Area Weighted Average Direct Slaughter Cattle Report. We then calculated the weighted average prices with and without the excluded transactions and the difference between these prices. In addition, we performed a statistical test to determine whether the difference between the prices, as a group, was statistically significant. We discussed how AMS performed audits to ensure packers were complying with the Livestock Mandatory Reporting Act provisions with AMS’s Audit Review and Compliance (ARC) officials in USDA headquarters, and with auditors in both Des Moines and St. Joseph. As part of this effort, we obtained and reviewed the mandatory price reporting audit reports that ARC conducted from May 2002 through April 2005. In particular, we used ARC’s database of audit reports to analyze the number of audits conducted over the time period, the number of findings related to those audits, and other information. ARC officials and our analysis indicated that the number of audit reports in the database closely approximated the number of audits conducted. We found this database to be sufficiently reliable for this purpose. Because this database did not provide specifics on the reasons AMS believed some companies were out of compliance, we performed a detailed review of all audit reports during one 4-month audit cycle from June through September 2004. We also obtained information from AMS Headquarters officials regarding the formal warning letters they sent packers and the penalties they assessed. We analyzed ARC’s audit methodology for sampling transactions and the extent to which that sample of transactions could provide information on packer compliance and the accuracy of the reported prices. In addition, we reviewed ARC policy and procedures, the audit report database, and had discussions with ARC officials and auditors. Specifically, we interviewed ARC officials regarding their audit methodology with emphasis on their sampling methodology, and we reviewed their documentation on sample selection. Furthermore, to analyze the agency’s sampling procedure, we compared the time between the audit field visit and the days selected for the audit of a full day’s transactions, and the audit of a sample of transactions over the 4-month audit cycle from June through September 2004. To determine the extent of coordination between GIPSA and AMS, we reviewed their legislative authority, identified activities and investigations involving both agencies, and reviewed GIPSA case file documentation from the competition-related investigations in which GIPSA obtained packers’ transaction data from AMS. We met with USDA Headquarters officials from AMS and GIPSA. In Des Moines, we met with GIPSA’s Packers and Stockyards Programs regional officials, and on separate occasions, spoke with GIPSA’s Denver Regional Office officials regarding GIPSA and AMS coordination. 
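The statistical comparison described in the methodology above involved two daily weighted average price series: prices as AMS reported them (after exclusions) and prices recomputed with the excluded transactions restored. Our discussion does not spell out the exact test used; a paired t-test on the daily series is one plausible form, sketched here with invented prices.

```python
from scipy import stats

# Invented daily weighted average prices, dollars per hundredweight.
reported   = [79.10, 80.25, 81.40, 83.00, 84.75, 86.10]  # after exclusions
recomputed = [79.10, 80.05, 81.40, 82.60, 84.75, 85.80]  # exclusions restored

# Paired test: each day contributes one (reported, recomputed) pair.
t_stat, p_value = stats.ttest_rel(reported, recomputed)
print(f"paired t = {t_stat:.2f}, p = {p_value:.3f}")
```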
During the course of our review, we identified and obtained the views of several industry groups and associations representing packers and producers. We also interviewed several nationally recognized economic experts knowledgeable about mandatory price reporting and related market issues. We conducted our review between February and November 2005 in accordance with generally accepted government auditing standards. Overall, from April 2005 through June 2005, we found that approximately 8.8 percent of cattle transactions, 0.2 percent of hog transactions, and 2.7 percent of boxed beef transactions were eliminated. From May 2003 to October 2003, a period of rapidly rising prices, we found that approximately 22.8 percent of all cattle transactions were excluded from AMS reports. Figure 1 shows that close to 95 percent of all excluded dressed weight cattle transactions from negotiated sales were smaller lots—groupings of cattle for sales purposes—of fewer than 25 cattle. However, as figure 2 shows, the negotiated live cattle transactions that were eliminated consisted of lots that were relatively larger than dressed cattle lots and more consistent in size; about 75 percent of excluded lots were larger than the 0-to-25-head category, and over 10 percent were between 201 and 400 head of cattle. Information on the size distribution of excluded lots is relevant because excluding large lots could have a relatively greater effect on the weighted average prices reported by AMS than excluding smaller lots. Also, the effects of excluding large lots could be greater in daily reports when trade volume is light, and an accumulation of excluded large lots could affect weekly and monthly reports. Market News reporters of hog trade eliminated significantly fewer transactions than the cattle reporters early in the livestock mandatory reporting program. For hogs, from October 2003 to March 2004, we found that approximately 0.1 percent of transactions were excluded, which represented less than 0.1 percent of all hogs. Figure 3 shows that, for negotiated sales, while nearly 40 percent of excluded transactions were smaller lots of 50 hogs or less, the largest category of slaughtered swine excluded—over 35 percent—was somewhat larger lots, in the 151-200 head lot category. During a sample period of rapidly rising prices, our analysis of cattle and hog livestock data shows that the elimination of transactions from Market News reports narrowed price ranges while having a limited, but frequently positive, effect on the average reported price. To illustrate this process, figures 4 and 5 show the differences in the distributions of cattle prices for dressed steers from May through July 2003 and how reporters' exclusion of cattle transactions eliminated outlying prices and narrowed the range of prices. During this same time period, reporters' exclusions decreased the number of packer transactions from 4,066 to 3,334. Excluding these transactions narrowed the associated price range—the difference between the minimum and maximum price—from $117.95 to $16.50 per hundredweight. Market News reporters' elimination of data for market reasons from reports between May and October 2003 had the effect of narrowing price spreads or ranges on a daily basis. For dressed steers, figure 6 shows the narrowing of the range of prices over this period before and after all excluded transactions, most of which were excluded for market reasons.
As shown in the figure, price ranges before any excluded transactions during this period were from $2 to $20 per hundredweight while, after all market exclusions were made, the range decreased to between $0 and $12 per hundredweight. Market News reporters are instructed to exclude prices that are $5 above or below the market to narrow the range of reported prices, and AMS record logs indicate that they do so. However, when prices are rising or falling rapidly, this practice may exclude some transactions that should reasonably be presented as reflecting the day-to-day variations in the market. Also, since these are national daily reports, price spreads tend to be larger since they encompass the full range of prices for all regions. During May to October 2003, a period of rapidly rising cattle prices, we estimate that the effect of eliminating transactions for market reasons was negligible about two-thirds of the time, while for the remaining third the reported average prices were generally higher than they would have been had these transactions not been eliminated. For live cattle sales, figure 7 displays the differences between the average weighted daily prices after AMS exclusions (as reported in Market News reports) and the average weighted prices based on including the transactions that AMS had excluded for market reasons for 35-65 percent choice steers from May through October 2003. The average weighted prices published by AMS for these dates were the same about 67 percent of the time, higher 31 percent of the time, and lower 2 percent of the time over this period. This suggests, and Market News record logs confirm, that during this period when Market News reporters were excluding transactions, they were predominantly excluding transactions for reasons of low price rather than high price. We found that over twice as many transactions were excluded for low price as for high price during this period. For 35 to 65 percent choice steers, dressed weight, figure 8 shows the differences between the daily weighted average prices reported by AMS and the average prices that AMS would have reported if AMS reporters had not eliminated transactions for market reasons. These differences display a trend similar to the one we identified for live cattle prices. When we compared our calculations of the weighted average prices with those AMS reported, about 32 percent of the prices AMS reported were higher than those it would otherwise have reported, about 67 percent were the same or about the same, and 1 percent were lower. This result indicates that market reporters of livestock were excluding a higher proportion of low prices during this period. AMS reporters may have excluded low prices more frequently during the period because prices were rising. What a reporter considered to be a high price during one week may have appeared to be a much lower price by the following week. Also, at the low end of the price ranges, transactions may have been excluded because the prices represented low-quality animals. The effect of an excluded transaction on any particular day is determined by how large that transaction is compared to the size and number of transactions that took place on that day or that week, and how far it is from the range of reported prices. While each excluded transaction alone may involve a small lot, a number of excluded transactions can cumulatively have a large effect on the weighted average price.
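The $5 instruction discussed above amounts to a price band around the reporter's view of the market. Treated as a mechanical cutoff (actual practice involves more judgment), it works roughly as follows; the prices and prevailing level are invented.

```python
prevailing = 84.00   # reporter's view of the day's market, dollars per hundredweight
band = 5.00          # the $5 instruction, applied here as a mechanical cutoff
prices = [78.50, 82.00, 83.75, 84.50, 86.00, 90.25]

kept = [p for p in prices if abs(p - prevailing) <= band]
dropped = [p for p in prices if abs(p - prevailing) > band]
print(f"Reported range: {min(kept):.2f} to {max(kept):.2f}")  # 82.00 to 86.00
print(f"Excluded: {dropped}")                                 # [78.5, 90.25]
```

When prices rise quickly, transactions priced at last week's levels fall more than $5 below the current market and are dropped, which is consistent with the predominance of low-price exclusions described above.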
To determine whether there was an overall statistical difference between our replications of AMS prices and the prices we determined would have been reported had reporters not eliminated transactions for market reasons, we tested the two average weighted price series for both live and dressed cattle. We found that for both live and dressed weight cattle, there was a statistically significant difference between the reported AMS weighted average prices and the prices that would have been reported if exclusions had not been made for market reasons. Our analysis of data from AMS's daily hog reports from October 2003 to March 2004 showed that, for the reports we examined, reporters frequently eliminated transactions that they believed to be errors that would potentially widen price ranges. However, unlike cattle, very few transactions were eliminated from reports for market reasons. As a result, for hogs, price ranges with and without exclusions by Market News reporters were more similar than for cattle. As illustrated in figure 9, the difference between prices reported by AMS and prices that would have been reported without the exclusions was notable on only 7 days for the National Daily Direct Morning Hog Report from October 2003 through March 2004. A similar analysis of the afternoon hog report shows the same pattern. [Table omitted: examples of packer reporting problems—such as incorrectly rounding reported sale prices and failing to report all sales required by the Livestock Mandatory Reporting Act—together with related USDA correspondence (e.g., a 3/18/03 letter from the Deputy Administrator responding to a packer's 2/10/03 letter) and status notes indicating issues pending while Market News reviews results from a 9/15/05 audit and current information provided by a packer.] Charles Adams, Assistant Director, and Aldo Davila, Barbara El Osta, Paige Gilbreath, Kirk Menard, Lynn Musser, Karen O'Conor, Alison O'Neill, Vanessa Taylor, and Amy Webbink made key contributions to this report.
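A comparison of the kind described above can be sketched as follows. The daily values are hypothetical, and because the report does not specify which statistical test was used, a paired t-test on matched daily weighted averages stands in here.

```python
# Compare two matched daily weighted-average price series (per cwt):
# the series as reported and the series without market-reason exclusions.
from scipy import stats

reported =           [79.10, 80.25, 81.40, 83.00, 84.75, 86.10, 88.30]
without_exclusions = [78.95, 80.25, 81.10, 82.70, 84.75, 85.80, 88.05]

t_stat, p_value = stats.ttest_rel(reported, without_exclusions)
print(f"paired t = {t_stat:.2f}, p = {p_value:.4f}")
# A small p-value would indicate a statistically significant difference
# between the two series, as GAO found for live and dressed weight cattle.
```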
Livestock producers, with gross income of $63 billion in 2004, depend on USDA's daily, weekly, and monthly livestock market news reports. These reports provide them and others in the industry with livestock and meat prices and volumes, which are helpful as they negotiate sales of cattle, hogs, lamb, and meat products. Packers also use the average prices in these reports as a basis for paying some producers with whom the packers have contracts. In 1999, the Livestock Mandatory Reporting Act was passed to substantially increase the volume of industry sales transactions covered by USDA's market news reports and thereby encourage competition in the industry. In the context of ongoing discussions about the renewal of this act, GAO reviewed (1) USDA's efforts to ensure the quality of its livestock market news reports and (2) the coordination between two USDA agencies that are responsible for promoting competition in livestock markets. While the U.S. Department of Agriculture (USDA) took important actions to produce quality livestock market news reports, GAO found that USDA could improve the reports' transparency. Although packers with large plants must report all of their livestock transactions to USDA, GAO found that USDA market news reporters regularly excluded some transactions as they prepared USDA's reports. For example, GAO's analysis showed that from April through June 2005, USDA reporters excluded about 9 percent of the cattle transactions that packers had reported. When USDA excluded transactions, this sometimes changed the low, high, and average prices that USDA would have otherwise reported. However, USDA has not informed its readers of the extent of this practice. Moreover, USDA's instructions for guiding its market news reporters as they prepared their reports lacked clarity and precision, leading to inconsistency in their reporting decisions. In addition, GAO found the accuracy of USDA's livestock market news reports is not fully assured. About 64 percent of 844 USDA audits of packers—conducted over 36 months ending in April 2005—identified packers' transactions that were inaccurately reported, unsupported by documentation, or omitted from packers' reports. Moreover, some packers have not promptly corrected problems. Since 2002, USDA has sent 11 packers 21 letters urging the packers to correct longstanding problems and warning them of the consequences of delay. Twice USDA has levied $10,000 fines on packers, but suspended the fines when these packers agreed to comply. As of September 2005, USDA had continuing issues with 2 of the 11 packers. USDA officials noted that packers' errors are relatively few compared to the large volumes of data that packers report daily. However, USDA has not (1) assessed the overall quality of packers' data, (2) used its audit results to help focus future audit efforts, and (3) ensured that follow-up promptly resolves problems. Two USDA agencies have addressed competition in livestock markets—the Agricultural Marketing Service (AMS) and the Grain Inspection, Packers and Stockyards Administration (GIPSA). GAO found the coordination between these agencies to be limited, primarily due to the legal authority within which each operates. AMS has implemented the Livestock Mandatory Reporting Act. That act did not provide authority for AMS to share individual packer transaction data within USDA except for enforcement purposes. In two investigations, AMS provided packers' data to GIPSA.
On the other hand, GIPSA enforces the Packers and Stockyards Act and is responsible for addressing unfair and anti-competitive practices in the marketing of livestock. Furthermore, GAO found that GIPSA monitors cattle and hog markets by analyzing publicly available livestock market news reports—an approach that has limitations because it lacks the company-specific information that would be useful for detecting anti-competitive behavior.
VA’s pension program is means-tested and provides a minimum level of economic security for veterans with financial need. It is one of two cash benefits programs administered by VA. The other is disability compensation, which pays benefits to veterans who have disabilities related to their military service—often referred to as “service-connected” disabilities. The pension program, on the other hand, which is the subject of this report, pays benefits to low-income veterans who either are elderly or have disabilities unrelated to their military service. Each program also provides benefits to eligible survivors. A veteran who applies for and meets requirements for both disability compensation and pension benefits will receive benefits through whichever program provides higher benefits. In 2006, VA paid over $34 billion in compensation and pension benefits to about 3.5 million veterans and survivors. Of this amount, $30.9 billion was paid in compensation benefits to 3,014,282 veterans and their survivors. The remaining $3.5 billion was paid in means-tested pension benefits to 535,380 veterans and their survivors. The amount of financial assistance provided by the pension program is relatively modest and intended to raise pensioners’ incomes to a level set out in statute. Pensioners are awarded an amount equal to the difference between their countable income, as determined by VA, and the maximum pension amounts as updated annually by statute. The maximum pension amount varies according to the pensioner’s circumstances, such as the number of dependents. In 2006, veterans with no income and no dependents could receive as much as $10,579 annually, while survivors with no income and no dependents could receive a maximum of $7,094 per year. Pensioners are required to report any changes in income, dependency, or other relevant circumstances to VA so that benefit levels can be adjusted accordingly. Generally, for each dollar of income received from other sources, the VA pension is reduced by the same amount. To determine a veteran’s initial eligibility for the pension program, VA’s regional office staff employ several criteria, including the veteran’s military status, age or disability, and income. (App. II provides a summary of this process.) Eligibility for pension benefits is restricted to veterans who are at least 65 or have total and permanent disabilities unrelated to their military service. Also, VA considers the income of all family members, including spouses and children, but excludes the income of other individuals residing in the household. Various sources of income are considered when determining income eligibility, including employment, interest and dividends, retirement, annuities, workers’ compensation, and Social Security retirement and Disability Insurance benefits. Unreimbursed medical expenses that exceed 5 percent of the maximum pension amount may be deducted from income in determining eligibility. Eligibility for the surviving spouses and children of such veterans is based on similar factors. Once pensioners have been awarded benefits, VA makes ongoing eligibility determinations and adjusts benefit levels as needed. Pensioners are required to inform VA of any changes in their circumstances—such as hospitalization or incarceration, as well as changes in income and assets—that could affect their eligibility or benefit levels. To further assess ongoing eligibility and benefit levels, VA also requires pensioners who have any income other than Social Security to file an annual report with VA, which VA then evaluates to determine whether pensioners continue to meet eligibility requirements.
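A minimal sketch of this award arithmetic follows, using the 2006 maximum of $10,579 for a veteran with no dependents given above; the claimant's income and medical expenses are hypothetical.

```python
# 2006 statutory maximum for a veteran with no dependents, per the text;
# the claimant's income and expenses below are hypothetical.
MAX_PENSION = 10_579

countable_income = 4_800   # e.g., Social Security and interest income
medical_expenses = 1_200   # unreimbursed medical expenses

# Unreimbursed medical expenses are deductible only to the extent they
# exceed 5 percent of the maximum pension amount.
deductible = max(0.0, medical_expenses - 0.05 * MAX_PENSION)
adjusted_income = max(0.0, countable_income - deductible)

# The award fills the gap between adjusted countable income and the
# maximum, so each dollar of other income generally reduces the pension
# by a dollar.
annual_pension = max(0.0, MAX_PENSION - adjusted_income)

print(f"deductible medical expenses: ${deductible:,.2f}")      # $671.05
print(f"annual pension award:        ${annual_pension:,.2f}")  # $6,450.05
```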
In 2006, most VA pensioners had nonpension incomes well below the federal poverty level, were beyond retirement age, and had multiple impairments, and the population has been decreasing in number. In addition to low incomes, the majority of VA pensioners had few assets and limited education. Since 1978, the total pension population has been decreasing, although there have been increases in the numbers of pensioners from more recent service periods, including the Vietnam era and the Gulf War. Most pensioners have very low annual incomes outside of their pension benefits. According to our analysis of VA data, veteran pensioners’ average annual nonpension income was $4,573 in 2006. This was well below the 2006 federal poverty level of $9,800 for a single adult. Survivors had a lower average annual nonpension income of $3,046. Both veterans and survivors under 65 had lower average annual nonpension incomes than those 65 and older. Social Security benefits and non-Social Security retirement income accounted for much of the difference, as shown in table 1. When VA pension benefits are included, most veterans had annual incomes above the federal poverty level. This is not true of survivors, who receive smaller pension awards than veterans. Pension benefits in 2006 averaged $8,232 per year for veterans and $4,260 per year for survivors, for average total incomes of $12,805 and $7,306, respectively. In a 2002 VA study, pensioners reported having few assets and low levels of education. Less than half of pensioners reported owning their own homes, with ownership rates for spouses lower than those for veterans. Even fewer pensioners owned vehicles: about one-third of veterans reported owning a car, and only about one-fifth of spouses did. Moreover, pensioners generally reported low levels of education, with those over age 65 reporting less education than those under age 65. More than half of veterans and spouses over age 65 reported not having a high school diploma, compared with 22 percent of veterans under age 65 and 44 percent of spouses under age 65. Over a third of pensioners under age 65 reported having a high school diploma, and less than 7 percent reported completion of a bachelor’s or higher degree. The average age of VA pensioners is approximately 70; about 60 percent are over age 65, and fewer than 20 percent are age 55 or younger. Significant numbers are over age 75, as shown in figure 1. The average age is highest for surviving spouses of deceased veterans, who constitute approximately one-third of all pensioners. In 2006, their average age was 72, while that of veterans was 69. About three-quarters of all surviving spouses are over age 65, compared to just over half of all veterans. Less than 2 percent of all pensioners are younger than age 45. Most VA pensioners have no spouse or dependent children, according to information the pensioners provided to VA in 2006. As shown in figure 2, about 82 percent of pensioners receive benefits for themselves alone, and most of the remaining 18 percent are veterans living with dependents. Proportionately few pensioners have dependent children eligible for pension benefits: about 22,000 of the half-million pensioners receive VA payments for support of their children, in most cases for one or two children.
Most veteran pensioners have multiple disabling conditions, with approximately 95 percent reporting at least one impairment and nearly 75 percent reporting two or more impairments. As shown in figure 3, excluding those impairments classified as “other,” musculoskeletal conditions were the most common type of impairment among veteran pensioners. For veterans under age 65, the next most common type of impairment was mental, while for veterans age 65 and older it was cardiovascular. Many pensioners require aid and attendance in their activities of daily living, such as dressing and feeding themselves. According to VA, slightly less than one-third of pensioners are housebound or in need of aid and attendance, and most of these pensioners are age 65 and older. While more than 40 percent of pensioners age 65 and over require aid and attendance, fewer than 15 percent of younger pensioners do. Only a small number of pensioners reside in a nursing home—less than one-half of 1 percent of pensioners under age 65 and 3 percent of pensioners age 65 and over. In a 2002 VA study, about 95 percent of veterans and 87 percent of surviving spouses reported having some form of health insurance. Medicare was a source of coverage for 50 percent of veterans and 69 percent of surviving spouses. Over half of veterans also reported health care coverage through VA or military hospitals. Fewer than 15 percent of veterans relied on Medicaid for their health insurance, and fewer than 15 percent had private health insurance. By contrast, 47 percent of surviving spouses relied on Medicaid, making it the second most common source of coverage after Medicare, while 18 percent reported having private insurance. As shown in table 2, the total number of pensioners has been decreasing in recent years, although the number of pensioners from more recent service periods has been increasing. In 2006, the three pension programs served about 535,000 veterans and survivors, a 75 percent decrease from 1978, when participation peaked at almost 2 million. VA attributes the overall reduction in numbers largely to the death of World War II era pensioners and the greater availability of Social Security retirement benefits, which often raise veterans’ incomes above the VA pension program’s eligibility levels. While total program enrollment has declined, the Vietnam era and Gulf War cohorts of pensioners have increased in number in recent years, as shown in table 3. VA expects the Vietnam era cohort to continue to increase in number as more Vietnam era veterans meet the 65-year age requirement for pension eligibility. An estimated 5.2 million of these veterans will be age 65 or older in the year 2015. However, VA has no estimate of how many also will have qualifying wartime service, meet income requirements, and submit pension claims. Further, the number of Gulf War veterans receiving pensions increased by nearly 300 percent between 2000 and 2006, although they still number fewer than 5,400. Even with increases in these two cohorts, VA estimates that the total pensioner population will continue to decline in size through at least 2017. The main reason for caseload termination is death, followed by increased income, as shown in figure 4. In January 2007, about 70 percent of veteran pension cases and 50 percent of surviving spouse cases were terminated as a result of the death of the pensioner. Increased income was the reason for termination in about a fifth of veteran cases and in about two-fifths of surviving spouse cases.
For surviving children, however, about two-thirds of the cases were terminated due to the child’s age, and one in five because of a death. VA policies and procedures are not sufficient to ensure sound decisions on new pension claims. Unlike other federal agencies with similar income-based programs, VA largely does not independently verify the accuracy of financial information provided by claimants to support initial pension program eligibility, a fact that makes the pension program vulnerable to improper payments. In addition, the guidance used by staff to make pension eligibility decisions, which is under revision and dispersed across several sources, is not always current or clear. Further, VA’s quality assurance review process for initial claims does not select a sufficient number of pension cases to examine to ensure the accuracy of pension decisions. Finally, VA does not adequately evaluate pension training. For example, VA does not systematically collect feedback from participants at the end of a training course. VA does not require new pension applicants to submit documents that would support their declarations of income and assets. While the agency does corroborate their reported Social Security income with SSA records, it does not require claimants to submit evidence for other financial resources, such as copies of bank statements or tax returns. Furthermore, until recently, there was no legislative authority for VA to arrange for the cross-checking of claimants’ statements of non-Social Security income against the Department of Health and Human Services’ (HHS) National Directory of New Hires (NDNH). The NDNH includes quarterly wage data for up to eight quarters, which can be compiled into annual data for matching purposes. HHS conducts matches using the NDNH for other agencies—such as SSA, IRS, and the Department of Housing and Urban Development—to assist them in improving their enforcement efforts. The Dr. James Allen Veteran Vision Equity Act of 2007, which became effective December 26, 2007, requires VA to provide applicant information to HHS and requires HHS to match it against the NDNH and disclose to VA information for verifying applicant employment and income. Furthermore, while VA allows income deductions for certain unreimbursed medical expenses, the agency does not always require documentation for these expenses. For example, VA requires documentation for the costs of nursing home care, but not for the cost of prescription drugs. In contrast, VA does require applicants to verify some nonfinancial information, such as by submitting an official notice of military separation or marriage and divorce records. We found that staff in the regional offices we visited used guidance that was not compiled in a single source; not always current; and, according to those we spoke with, unclear. This may be, in part, because VA has been in the process, since 2001, of revising various sections of its compensation and pension manual to help clarify complex regulations. However, staff told us that while the revisions are taking place, they must check a variety of sources for updates, including e-mails and posted memos, to be certain they have the most current version for a specific procedure. Staff also said the piecemeal and dispersed nature of the guidance can lead to different interpretations for pension eligibility decisions. VA expects to have most of the revisions implemented by 2009.
Finally, staff also expressed concern over the clarity of some pension guidance, which they said both leaves too much room for interpretation and can result in inconsistent decisions on eligibility. For example, some said they must interpret ambiguous guidance when determining how to treat claimants in assisted living centers versus nursing homes, and that it is possible for staff to reach different conclusions about a claimant’s eligibility or proper benefit. They told us that when faced with unclear guidance, staff are expected to use their own discretion in interpreting the guidance, along with the advice of supervisory staff, which they believe can vary. The internal controls VA employs to evaluate the quality of initial pension decisions are insufficient because VA reviews only a very small random sample of initial claims that are selected from compensation and pension cases. Since pension claims constitute only about 11 percent of the combined compensation and pension caseload, few are likely to be included in the quality assurance review sample. Although VA reported about a 12 percent error rate for compensation and pension claims combined, a recent VA Inspector General (IG) study found a higher incidence of errors in some cases that subsequently required pension payment adjustments. Specifically, the IG reported that VA procedures did not ensure that the benefits of veterans hospitalized for more than 90 days were appropriately adjusted. VA’s training program does not include a comprehensive evaluation to ensure its effectiveness. Although we have previously reported that evaluation is essential to performance, VA does not systematically collect feedback from participants at the end of a training course. For example, new staff receiving training at VA’s training center in Baltimore are required to submit evaluations of the training, but staff receiving training in the use of an electronic Web tool are not. When we discussed VA’s limited evaluation of training with headquarters officials, they noted that the agency has made assessments of new training materials before they are put into place. However, VA has not consistently evaluated all training courses offered at the regional offices. Many staff members told us that some of their training is repetitive or does not include updates and revisions in procedures. VA procedures for assessing whether pensioners continue to receive the proper benefits have significant limitations because VA does not require pensioners to submit financial documentation, conducts untimely and inefficient verification of pensioners’ incomes and assets, and lacks a system for identifying and reducing improper pension benefits. Although the agency requires all pensioners to submit documentation for nonfinancial changes, such as marriages or deaths, it does not require documentation such as bank or asset statements when pensioners report financial changes. Also, while the agency does verify certain pensioner information by comparing it with data from other federal agencies, we found that a key procedure using SSA and IRS data is not conducted in a timely or efficient manner. Finally, despite millions of dollars in improper payments made each year, VA does not collect sufficient data on the causes of improper payments that could be used to help it better manage the pension program.
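To illustrate the third-party income matching at issue here, including the NDNH match newly authorized in December 2007, the following sketch compiles hypothetical quarterly NDNH wage records into an annual figure and flags a discrepancy with a claimant's self-reported income. The tolerance is an assumption for illustration, not a VA parameter.

```python
# Hypothetical NDNH quarterly wage records for one pensioner; the NDNH
# holds up to eight quarters of wages, which can be rolled up by year.
quarterly_wages = [
    ("2006Q1", 2_400), ("2006Q2", 2_650),
    ("2006Q3", 2_500), ("2006Q4", 2_700),
    ("2007Q1", 2_800), ("2007Q2", 2_900),
]

self_reported_2006 = 6_000   # income the pensioner reported to VA

annual_2006 = sum(w for q, w in quarterly_wages if q.startswith("2006"))

# Hypothetical tolerance before a discrepancy is pursued via due process.
TOLERANCE = 500
if annual_2006 - self_reported_2006 > TOLERANCE:
    print(f"flag for review: NDNH shows ${annual_2006:,} in 2006 wages; "
          f"claimant reported ${self_reported_2006:,}")
```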
Although VA requires all pensioners to report changes that might affect their payments or eligibility, it does not require them to submit documentation to corroborate changes in financial circumstances. Whereas VA does require pensioners to submit documents for such changes in circumstances as marriage or spousal deaths, it does not do so for reported changes in income or assets. This leaves the agency heavily dependent on the pensioner for self-reported financial updates. The exception is Social Security income, which VA staff verify using a direct computer link to SSA benefit data. Pensioner reports of other financial changes, however, need not be accompanied by such documents as pay stubs, bank statements, or tax returns. This is in contrast to other income-based federal programs, which typically ask for verification of key financial information. For example, SSA requires Supplemental Security Income (SSI) recipients to document their earnings on a regular basis. Similarly, VA does not ask pensioners for financial documentation when they complete the annual Eligibility Verification Report (EVR), the requested update that the agency sends out annually to those who have previously reported having income and assets other than their pensions and Social Security benefits. VA uses information collected from the EVR responses to adjust or, if necessary, terminate VA pension benefits. (See table 4 for an overview of the EVR steps.) VA has indicated that the pension program is prone to overpayments caused by pensioners failing to report income changes as they occur. One agency official responsible for the management of the EVR process told us that the EVR process provides the agency an opportunity to adjust pension benefits as pensioner status changes, thus preventing higher overpayments. In addition to VA not asking respondents to submit financial documentation, we found a number of other deficiencies in the EVR process that are likely to limit VA’s ability to make timely and accurate adjustments to benefits. First, the EVR process queries only pensioners who have previously reported income to VA, so the agency fails to reach pensioners who may acquire new sources of income, such as earnings from new employment. Second, during the annual review of EVRs, VA does not attempt to corroborate via independent third-party sources any information that pensioners report on the EVR beyond SSA benefits. Third, the EVR asks pensioners to estimate their income for the year, and the agency adjusts their pensions based on these estimates. Yet pensioner estimates could prove to be incorrect; for example, income estimates for the coming year could change due to the death of a spouse. Fourth, the narrow seasonal window within which pension maintenance center (PMC) staff attempt to review and process the EVRs—the first 3 months of the year—postpones other pension-related activities, including data comparisons with other federal agencies that could provide third-party verification. This limited approach to verification of pensioner-provided updates puts the program at risk of issuing improper payments based on pensioners’ reports. In addition to deficiencies in the EVR process, VA’s ability to detect improper payments is hindered by untimely processing of a key data match. Through formal agreements, VA compares its electronic data on pensioners’ reported income, and other eligibility information, to similar information from a number of federal agencies, as shown in table 5.
VA identifies discrepancies and, after due process, uses this information to adjust or terminate pension benefits. VA projects that its data-matching activities save the pension program millions of dollars annually. Nevertheless, VA’s key data match, the Income Verification Match (IVM) program, which uses SSA and IRS data to detect pensioners who have failed to report earned or unearned income, is running far behind schedule. This match is scheduled to occur on an annual basis. However, since 2006, VA has attempted to become current by conducting 2 years’ worth of IVM data matches simultaneously. Yet combining data match records for multiple years has added to the complexity and the length of case evaluations, according to officials we interviewed. VA estimates that the IVM data match has the potential to save the program over $10 million each year, which suggests that a 2-year delay could postpone the identification of as much as $25 million in payment errors. Such errors include both underpayments, which require VA staff to make retroactive payments or benefit adjustments, and overpayments, which can be burdensome for VA to recover from such a low-income population and which threaten pensioners’ financial stability. While VA officials told us they plan to have all IVM data match cases entered into their system by the end of 2007, they could not provide assurance that these cases will be completed in the same year. The effectiveness of the IVM is also undermined by the fact that the data used are not current. VA uses income data that are about 2 years old, despite the fact that the SSA data are available earlier and that more recent earned income data are available from another federal database, the NDNH. Moreover, while VA could make use of SSA earned income data as early as September following the end of the tax year, it postpones the pension match and waits for unearned income data from IRS so that it can simultaneously evaluate eligibility for pension and Individual Unemployability (IU) benefits. We determined that the combination of old earned income data, along with the delays noted above, means that individuals with unreported earned income could continue to receive benefits for at least 2 years before VA can determine that their pension benefits should be adjusted or terminated. We also found that VA’s handling of the IVM data match results is inefficient because the dollar threshold used to select pensioners for review of ongoing pension eligibility has not kept pace over nearly two decades with the wages necessary for a person to sustain a living. Specifically, the IVM data match threshold in use until December 2006—which VA told us was meant to represent wages for marginal employment—was set in 1988 (when VA’s maximum annual pension rate for a veteran with no dependents was $6,463). Yet between 1988 and 2006, the U.S. Census Bureau’s poverty threshold increased over 70 percent. As a result of VA’s continued reliance upon the 1988 threshold, PMC staff told us that they manually reviewed many more cases than necessary in order to find and delete those cases that did not warrant a review. In fact, staff typically eliminated about one-third of the initial matches because the combined countable income and pension did not exceed the maximum allowable pension benefit. VA increased the threshold from $6,000 to $9,383 in December 2006, but has not decided whether it will update the threshold on a regular basis in the future.
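The case-selection logic described above can be sketched as follows: a matched case warrants review only when the matched income meets the threshold and the income plus the pension paid exceeds the maximum allowable pension. The case records are hypothetical, and this is an interpretation of the practice the report describes, not VA's actual procedure.

```python
# Hypothetical matched cases; all amounts are annual.
cases = [
    {"id": "A", "matched_income": 7_000, "pension_paid": 3_000},
    {"id": "B", "matched_income": 12_500, "pension_paid": 8_000},
]

IVM_THRESHOLD = 9_383   # threshold adopted in December 2006, per the text
MAX_PENSION = 10_579    # 2006 maximum, veteran with no dependents

for case in cases:
    meets_threshold = case["matched_income"] >= IVM_THRESHOLD
    exceeds_maximum = (case["matched_income"] + case["pension_paid"]
                       > MAX_PENSION)
    if meets_threshold and exceeds_maximum:
        print(f"case {case['id']}: select for IVM review")
    else:
        # Under the old $6,000 threshold, case A would have been pulled
        # for manual review even though no adjustment was warranted
        # (7,000 + 3,000 does not exceed the 10,579 maximum).
        print(f"case {case['id']}: screen out")
```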
The timeliness of the IVM process is also hampered by the fact that VA’s match records are paper based and lack the organizing and transmission efficiencies of an electronic database. Therefore, VA’s follow-up work is slowed by the need to ship boxes of uncollated paper records—worksheets, correspondence with pensioners and employers, and related documents—from the Hines center to the PMCs for manual assembly and analysis. Although, as we have noted in prior work, software exists to transfer confidential information electronically and securely, VA officials said the need to comply with IRS’s specific security measures for income-related data has precluded VA’s use of it. Many millions of dollars in improper payments accrue each year before they are discovered and corrected, but VA has not taken steps to identify the causes of these improper payments. According to VA estimates, about 8.4 to 11 percent of pension program payments were made in error each year between 2003 and 2006, with most of these being overpayments, as shown in table 6. Specifically, VA estimated a total of almost $1.2 billion in overpayments and about $44 million in underpayments during this period. Such payment errors, particularly underpayments, can have a negative impact on pensioners living on very low incomes. By contrast, the agency has estimated that only about 1 percent of disability compensation program payments were made in error, with about $730 million in overpayments and $375 million in underpayments for the same period. Furthermore, according to VA officials, less than 30 percent of the overpayments for compensation and pension are recovered. We found that VA has not been successful in collecting overpayments. In fact, a significant portion of overpayments are written off because of pensioners’ death or bankruptcy, or because the overpayment is considered uncollectible. For example, in cases where VA’s Committee on Waivers concludes that repayment of the overpayment debt would cause undue financial hardship for the pensioner, the debt may be waived. Most of the remaining overpayments remain on VA’s financial accounts for up to 10 years, at which time VA discontinues its collection efforts. Despite VA’s estimates of relatively high amounts of improper payments in the pension program, the agency lacks a process for determining the nature and actual extent of payment errors. For example, VA currently does not have the ability to identify the dollar amount of overpayments generated by a failure to report income, gambling winnings, or the death of a spouse. Likewise, VA cannot identify the amount of improper payments caused by administrative factors, such as those associated with VA delays in conducting the IVM data matches. Although the agency has identified several causes of improper payments, such as inaccurate reporting of Social Security benefits, remarriage of a surviving spouse, or changes in dependents, it has not developed a system to assess the degree to which they occur, and therefore cannot develop strategies to target problems and take steps to correct them. VA’s pension eligibility operations currently provide a limited return on investment when it comes to making appropriate payments. The agency’s financial verification processes are heavily weighted toward detection of payment errors late in the process rather than up-front prevention of errors.
By not requiring supporting financial documentation from individuals at the time of application, and relying much later in the process on untimely data matching for this information, VA increases the likelihood of making improper payments. Additionally, VA concentrates staff resources in the first 3 months of each year on a process for updating eligibility that is time-consuming and still lacks financial documentation or independent verification. Meanwhile, the data-matching operations that do offer third-party verification of financial status are delayed until after the EVR process is completed. A key data match procedure, the IVM, has been delayed in the past, and the agency continues to be at risk of making improper payments as it tries to rectify the situation. Even when the IVM data match becomes current, VA’s ability to use it in a timely way will be limited if processing of the information remains grounded in manual handling of paper documents. Moreover, if VA does not regularly update the IVM data match threshold in its incoming financial information system, the agency will continue to have an inefficient procedure for selecting cases for IVM review. In view of the fact that the program incurs a proportionately high level of improper payments—about 8.4 to 11 percent of annual program benefits—VA’s investment in the prevention of errors seems too modest. VA could readily justify increasing its investment in the error prevention process. The agency does not have a quality assurance process robust enough to ensure with a high degree of confidence the accuracy of decisions made in initial pension cases, it has not developed a consistently rigorous approach to the evaluation of training, and it does little to identify and analyze the causes of improper payments. Taken together, these weaknesses mean that VA forfeits the opportunity to address administrative shortcomings and prevent payment errors. Unless these weaknesses are addressed, VA pension benefit payments will remain vulnerable to a relatively high rate of errors. Reconciling improper payments draws resources from the agency that could be better utilized elsewhere, and to the extent that overpayments are never collected, they undermine the agency’s financial stewardship over public funds. Moreover, underpayments can lead to hardships among this low-income segment of the veteran population. Certainly, the program is not too small to warrant ensuring that appropriate levels of assistance are provided to a vulnerable and deserving population. This can and should be done with a level of efficiency that minimally burdens all concerned. In order to enhance VA’s management of the pension eligibility process, we recommend that the Secretary of Veterans Affairs direct the Under Secretary for Benefits to take the following actions. 1. Take steps to make more accurate and timely decisions about initial and ongoing pension eligibility and payment levels. Such actions might include requiring pensioners to submit supporting documentation of their income and assets, conducting more robust independent verification with third parties, or maximizing process automation. 2. Take steps to improve its quality assurance review of initial claims, which could include reviewing a larger sample of pension claims. 3. Incorporate evaluative feedback more consistently into the planning, design, and implementation of its training efforts. 4. Evaluate options for improving the effectiveness of its annual eligibility verification review.
This effort could involve reformulating the EVR process by (a) surveying every pensioner rather than a selected subset; (b) performing reviews on a rolling basis, such as on an individual’s anniversary date, rather than diverting staff to this activity for a 3-month period; (c) reviewing pensioners’ eligibility once every few years rather than every year; or (d) focusing on verifying pensioners’ income and assets. 5. Update the IVM data match threshold level to be in line with the U.S. Census Bureau poverty threshold or a comparable measure. 6. Evaluate the causes of improper payments and use the results to develop and implement an action plan to prevent them from occurring. We provided a draft of this report to the Secretary of Veterans Affairs for review and comment. In its written comments on a draft of this report (see app. III), VA agreed fully or in part with our recommendations. Although VA raised concerns with some of the options we present to help implement several of the recommendations, the agency’s comments indicate it will take steps to implement these recommendations. Regarding our recommendation that VA should take steps to make more accurate and timely pension eligibility and payment decisions, the agency agreed in part but took exception to several of the actions we suggested. VA noted that requiring pensioners to provide documentation of income and net worth could be onerous to individuals and could possibly diminish the timeliness of initial pension eligibility decisions. However, VA also stated elsewhere in its comments that pensioners use end-of-year tax statements (Form 1099) to accurately report income from all sources. This indicates that it may not be an added burden for pensioners to include a Form 1099 in their initial claim. But any burden added to preparing the initial application needs to be weighed against the possible gains in accuracy and the avoidance of corrective actions later taken to address improper payments. Additionally, VA cited its data-matching process with SSA as an existing mechanism to independently verify income. However, as we point out in this report, other federal agencies with similar income-based programs independently verify self-reported information to support initial eligibility decisions. By verifying claims up front, rather than years after the claim is established, the agency could save the program millions of dollars that it might otherwise never recover. Moreover, VA does not always require documentation of unreimbursed medical expenses, which can be deducted from income for pension eligibility; it is important for VA to gather supporting documentation in this area as well. The agency may consider piloting the feasibility of requiring additional documentation of key financial information. Regarding our recommendation that VA take steps to improve its quality assurance review of initial claims, which could include reviewing a larger sample of pension claims, the agency agreed and indicated it will double the entire rating sample size in 2008. While we commend VA’s effort to increase the overall number of claims reviewed in its quality assurance review of rating-related decisions, we remain concerned that this approach will not ensure that enough initial pension claims are reviewed for quality assurance. As we point out in this report, VA samples 10 claims from most regional offices’ caseload of compensation and initial pension claims.
Given that initial pension claims constitute about 11 percent of the caseload, a regional office, on average, would likely have only 1 pension claim selected for review. Doubling the sample size would increase the expected number of claims to 2, which we believe is too few. Alternatively, VA might consider increasing the number of pension claims in the overall sample, such as by weighting the sample to include more pension claims or conducting stand-alone reviews of pension claims. The agency agreed with our recommendation to incorporate evaluative feedback more consistently into the planning, design, and implementation of its training efforts. The agency agreed with our recommendation that VA evaluate options for improving the effectiveness of its annual eligibility verification review. The agency stated there are inconsistencies between the possibilities we present, though it did not elaborate. Our intent in presenting these options is to stimulate actions that VA may take to improve the effectiveness of the EVR process. The agency discussed issues—such as veterans’ ease in providing corroborating information—that need to be considered as it moves forward. The agency agreed with our recommendation that it update the IVM data match threshold level to be in line with the U.S. Census Bureau poverty threshold or a comparable measure. The agency agreed with our recommendation that it evaluate the causes of improper payments and use the results to develop and implement an action plan to prevent them from occurring. In the version of the draft report sent to VA for review, we recommended that VA seek legislative authority to use the NDNH in its enforcement efforts. However, we have withdrawn this draft recommendation because VA has been mandated to use this database effective December 2007. In its comments on this draft recommendation, VA indicated that it has initiated the process necessary to gain access to the NDNH earnings database. Given this development, we encourage VA to act swiftly to position itself to fully utilize the NDNH. We will send copies of this report to the Secretary of Veterans Affairs, appropriate congressional committees, and other interested parties. The report will also be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7215 or bertonid@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Other contacts and staff acknowledgments are listed in appendix IV. The Senate Veterans’ Affairs Committee asked GAO to (1) determine the characteristics and trends in size of the pensioner population, (2) assess the policies and procedures the Department of Veterans Affairs (VA) has in place to ensure that initial pension eligibility decisions are well managed, and (3) assess the procedures VA has in place to ensure that pensioners continue to receive the proper benefit payments. To determine the characteristics and size of the pensioner population, we analyzed data from VA’s Benefit Delivery Network (BDN), VA budget reports, and other reports through fiscal year 2006. To address our remaining objectives, we reviewed relevant laws, guidance, procedures, and internal controls that VA uses to ensure the soundness of pension benefit decisions. We also analyzed VA’s internal control policies and performance reports.
We visited 4 of VA’s 57 regional offices, located in Boston, Milwaukee, Providence, and St. Paul. We selected these sites based on variations in size and geographic location. We also visited VA’s three pension maintenance centers (PMC), located in Philadelphia, Milwaukee, and St. Paul, and the Debt Management Center in St. Paul. We interviewed VA officials and staff at these sites as well as officials at VA Central Office in Washington, D.C. We also conducted case file reviews in three locations—Milwaukee, Providence, and St. Paul—to verify the adequacy of documentation in support of initial pension decisions. We conducted our review from November 2006 to February 2008 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. To analyze the characteristics of pensioners, we extracted data on basic characteristics and enrollment trends from VA’s Benefits Delivery Network and routine annual VA budget submissions. We supplemented our analysis with additional data from a study contracted by VA and conducted by ORC Macro. VA maintains data on pensioners in its Benefits Delivery Network at the Hines Information Technology Center in Illinois. Hines issues routine reports to various units in VA, including budget and policy, for use in program management. Most of these reports provide characteristics for two or three factors, for example, for veterans by age grouping and by state of residence. We based our analysis on year-end reports through fiscal year 2006. To obtain data not available from published VA reports, we analyzed additional BDN data as of September 2006. VA regularly provides GAO with two sources of BDN data. The first provides data on a few characteristics of the entire universe of pensioners. The other provides data on a broader range of characteristics for a random sample of 5 percent of all pensioners. We used data from the entire universe of cases when they were available and supplemented our analysis with additional data from the random sample. When reporting data from the 5 percent sample, we report confidence intervals around point estimates. We also analyzed data extracted from tables in the fiscal year 2002 through 2008 budget submissions. In 2002, VA contracted with ORC Macro to survey a sample of pensioners that included veterans and surviving spouses. For more detailed data not available from VA’s BDN, we used data reported in the ORC Macro report. However, these data are not generalizable to current pensioners because of the time frame in which the survey was conducted. The ORC Macro study was a telephone-administered survey, and its sample generally mirrored the pensioner population at the time. However, for the sample of veterans, nursing home residents were underrepresented. In addition, the spouse sample and the sample of recently enrolled spouses underrepresented spouses receiving aid and attendance services. In all three cases, ORC Macro conducted a sensitivity analysis to determine whether correctly representing these underrepresented groups would have changed key question response frequencies; in each case, it found they would not.
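As an illustration of reporting a confidence interval around a point estimate from the 5 percent sample, the following sketch uses a normal approximation for a proportion; the sample size and estimate are hypothetical.

```python
import math

# Hypothetical: a 5 percent sample of roughly 535,000 pensioners, and an
# estimated proportion (e.g., the share with no spouse or dependents).
n = 26_750        # about 5 percent of 535,000
p_hat = 0.82      # hypothetical point estimate from the sample

# Normal-approximation 95 percent confidence interval for a proportion.
# (A finite population correction would tighten this slightly.)
se = math.sqrt(p_hat * (1 - p_hat) / n)
margin = 1.96 * se
print(f"estimate: {p_hat:.1%} +/- {margin:.2%}")   # about +/- 0.46%
```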
GAO’s applied research and methods group evaluated the BDN and ORC Macro survey data and found the information to be sufficiently reliable for our purposes. To verify the adequacy of documentation in support of initial pension decisions, we reviewed case files on a random sample of Improved Pension claims and Death Pension claims that had been completed by three VA regional offices, in Milwaukee, St. Paul, and Providence. We chose Milwaukee and St. Paul because they were associated with a PMC. Because the Milwaukee site processes new claims in both the regional office and the PMC, we included files that originated at both locations in our review. Additionally, because the St. Paul office completed pension cases that had been transferred from another regional office, we included those cases in our review. We chose the Providence regional office because it was not associated with a PMC and because of its geographic location. At the three sites we reviewed a total of 72 case files for completed initial pension claims. In order to select case files to review, VA provided a list of all initial (or original) claims for Improved Pension (for veterans) and Death Pension (for survivors) considered and determined between February 1, 2007, and February 28, 2007. To select from this list, which included both approved and denied cases, we used random number assignment procedures to help ensure that a broad range of these types of cases was included in our sample. Specifically, at St. Paul, we used a randomized ordering process to select and review 32 case files. On subsequent site visits, we randomized all EP 180 and EP 190 claims and reviewed the first available 14 EP 180 and 6 EP 190 case files. In some instances, case files we selected were unavailable because they were being reviewed at the regional office, or because claims initially categorized as pension claims no longer met our selection criterion of initial pension claims once VA had developed the claim. We attempted to review a 2-to-1 ratio of EP 180 to EP 190 cases, which is approximately the ratio that exists in the pension claims population. The number of case files we reviewed at each site is given in table 7. To ensure consistency, we used a standardized checklist to examine case files at each location, and the same individuals conducted the reviews at all three sites. The checklist, which was developed by examining the procedural guidance and case files at a regional office, included information about each of the major eligibility criteria: military service, disability, net worth, and income. In addition, we reviewed demographic information about the claimants. We also looked for evidence that VA had followed guidance on procedures, for example, whether letters were sent to the claimant appropriately. At each of the sites we communicated with VA office management when questions arose about the files we were reviewing. In some cases these conversations helped us better understand why a case had been handled in a particular way, while in other cases management acknowledged that errors had been made. The results of visits to the sample of regional offices are not generalizable to all 57 regional offices; similarly, results of case file reviews are not generalizable to all pension files. The testimonial evidence, such as that gathered during interviews with staff, also is not generalizable.
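A randomized ordering selection of the kind described above might look like the following sketch. The 14-to-6 split reflects the counts used on the later site visits, but the claim lists and file identifiers are hypothetical.

```python
import random

# Hypothetical claim lists for one regional office's completion month.
ep180 = [f"EP180-{i:03d}" for i in range(1, 61)]   # Improved Pension
ep190 = [f"EP190-{i:03d}" for i in range(1, 31)]   # Death Pension

random.shuffle(ep180)   # randomized ordering of each claim type
random.shuffle(ep190)

# Take files in the randomized order until the 2-to-1 target is met;
# in practice GAO took the first *available* files, skipping any that
# were checked out or no longer met the selection criterion.
selected = ep180[:14] + ep190[:6]
print(f"{len(selected)} case files selected "
      f"({len(ep180[:14])} EP 180, {len(ep190[:6])} EP 190)")
```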
VA’s Compensation and Pension (C&P) Service administers the pension program and oversees the operation of 57 regional offices and three pension maintenance centers (PMC). The C&P central office has responsibility for managing policy and procedures, guidance, quality assurance, and general operations. The regional offices process initial pension applications and determine eligibility. The offices have jurisdiction in the geographic area where a veteran or survivor lives. At least one regional office is located in every state except Wyoming; offices are also located in the District of Columbia, Puerto Rico, and the Philippines. The PMCs are responsible for conducting annual reviews of eligibility and adjusting benefit levels when a pensioner’s circumstances change. As part of the annual review, staff at the PMCs examine annual reports from pensioners, which contain information on, for example, income and medical expenses. Staff also examine the results of a variety of computerized data matches with other government agencies to determine whether adjustments are warranted. Additionally, VA’s Hines Information Technology Center handles all computer transactions that affect benefit payment levels, and VA’s Debt Management Center is responsible for collecting overpayments that occur. To determine a veteran’s initial eligibility for the pension program, VA’s regional office staff review several eligibility criteria, including the veteran’s military status, age or disability, and income. For the surviving spouses and children of such veterans, VA uses similar factors to determine eligibility, as shown in figure 5. All survivors are subject to income and asset limits, but only surviving children must meet disability or age requirements. To qualify for the pension program, veterans must meet one of four military service criteria set out in statute, although in practice these have been collapsed into a single requirement. VA characterizes these criteria as generally requiring that veterans have served on active duty in the military, naval, or air forces for 90 days or more, with at least one of those days being during a period of war, and have been discharged under conditions other than dishonorable. In addition, veterans who enlisted after September 7, 1980, generally must have served at least 24 months or the full period for which called or ordered to active duty in order to qualify. Eligibility for pension benefits is restricted to veterans who are either totally and permanently disabled due to circumstances unrelated to their military service or to willful misconduct, or who are at least age 65. Veterans under age 65 must be considered totally and permanently disabled, which means that the veteran is unable to pursue substantially gainful employment due to a disabling condition and that the condition is reasonably certain to continue throughout the veteran’s life. Veterans who receive Social Security disability benefits or long-term care in nursing homes are presumed to be totally and permanently disabled. In determining eligibility, VA considers the income of all family members, including spouses and children, but excludes the income of other individuals residing in the household. Pensioners meet income eligibility requirements if their family incomes are less than the maximum annual pension amounts established annually by Congress.
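The screening criteria just described (90 days of active duty with at least one wartime day and a discharge other than dishonorable; age 65 or total and permanent disability; family income below the maximum annual pension) can be sketched as a simple check. The claimant record is hypothetical, and real determinations involve documentation and judgment, including the net worth review discussed below.

```python
# Hypothetical claimant record; field names are illustrative only.
claimant = {
    "active_duty_days": 730,
    "served_during_wartime": True,
    "discharge_other_than_dishonorable": True,
    "age": 67,
    "totally_permanently_disabled": False,
    "family_income": 6_200,
}
MAX_ANNUAL_PENSION = 10_579   # 2006 maximum, veteran with no dependents

service_ok = (claimant["active_duty_days"] >= 90
              and claimant["served_during_wartime"]
              and claimant["discharge_other_than_dishonorable"])
age_or_disability_ok = (claimant["age"] >= 65
                        or claimant["totally_permanently_disabled"])
income_ok = claimant["family_income"] < MAX_ANNUAL_PENSION

print("meets screening criteria:",
      service_ok and age_or_disability_ok and income_ok)
```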
Various sources of income are considered when determining eligibility, including employment, interest and dividends, retirement, annuities, workers’ compensation, and Social Security Disability Insurance benefits. Income from other federal or state means-tested programs—such as Temporary Assistance for Needy Families, food stamps, housing assistance, or Supplemental Security Income—is not counted toward family income. Pensioners may deduct unreimbursed medical expenses that exceed 5 percent of the maximum annual pension amount from their income for the purposes of determining eligibility. VA does not apply specific limits on the net worth of assets when determining eligibility for pensions. However, veterans will not be awarded a pension if VA determines that they have sufficient assets to live on for a reasonable period of time. To make this determination, VA guidance calls for net worth in excess of $80,000 to be reviewed during the initial eligibility determination process, taking into consideration such factors as the veteran’s life expectancy and the convertibility of the assets into cash. Net worth includes all personal property and real estate owned by veterans and their families, excluding the value of their homes and the land on which the homes are located. When a veteran with qualifying wartime service dies, the veteran’s survivors may be entitled to a pension. The veteran does not have to be receiving a pension at the time of death in order for survivors to be eligible for benefits. Survivors are not required to have a disability in order to qualify, but they must meet income and asset requirements. For spouses, there is no age requirement, but they lose eligibility if they remarry. Generally, children under age 18 are eligible, as are those under age 23 who are in school. Older unmarried children may also be eligible, but only if they are incapable of self-support and this incapacity occurred prior to their reaching age 18. Once pensioners have been awarded benefits, VA makes ongoing eligibility determinations and adjusts benefit levels as needed. Pensioners are required to inform VA of any changes in entitlement factors that could affect their eligibility or benefit levels as soon as they occur. Hospitalization or incarceration, as well as changes in income, assets, or marital status, can affect the continued eligibility of pensioners or result in adjustments to the amount of payments that they receive. Any changes in circumstances must be reported to VA in writing as soon as they occur, and VA is required to make any necessary adjustments. To further assess ongoing eligibility and benefit levels, VA also requires pensioners who have any income other than Social Security to file an annual report with VA. Then, VA evaluates the information in this report to determine if pensioners continue to meet income eligibility requirements. The following individuals made important contributions to the report: Brett Fallavollita, Assistant Director; Anna Kelley; Scott Purdy; Shannon Diamant; as well as Susan Bernstein; Pat Elston; Lara Laufer; Wayne Turowski; Walter Vance; Joan Vogel; and Craig Winslow. Veterans Affairs: Continued Focus on Critical Success Factors Is Essential to Achieving Information Technology Realignment. GAO-07-844. Washington, D.C.: June 15, 2007. Veterans Benefits Administration: Progress Made in Long-Term Effort to Replace Benefits Payment System, but Challenges Persist. GAO-07-614. Washington, D.C.: April 27, 2007.
Veterans’ Disability Benefits: Processing of Claims Continues to Present Challenges. GAO-07-562T. Washington, D.C.: March 13, 2007.
Veterans’ Disability Benefits: VA Can Improve Its Procedures for Obtaining Military Service Records. GAO-07-98. Washington, D.C.: December 12, 2006.
Veterans’ Disability Benefits: VA Should Improve Its Management of Individual Unemployability Benefits by Strengthening Criteria, Guidance, and Procedures. GAO-06-309. Washington, D.C.: May 30, 2006.
Veterans’ Benefits: Further Changes in VBA’s Field Office Structure Could Help Improve Disability Claims Processing. GAO-06-149. Washington, D.C.: December 9, 2005.
Veterans’ Benefits: VA Needs Plan for Assessing Consistency of Decisions. GAO-05-99. Washington, D.C.: November 19, 2004.
Veterans’ Benefits: More Transparency Needed to Improve Oversight of VBA’s Compensation and Pension Staffing Levels. GAO-05-47. Washington, D.C.: November 15, 2004.
Veterans’ Benefits: Improvements Needed in the Reporting and Use of Data on the Accuracy of Disability Claims Decisions. GAO-03-1045. Washington, D.C.: September 30, 2003.
Veterans Benefits Administration: Process for Preventing Improper Payments to Deceased Veterans Can Be Improved. GAO-03-906. Washington, D.C.: July 24, 2003.
Major Management Challenges and Program Risks: Department of Veterans Affairs. GAO-03-110. Washington, D.C.: January 1, 2003.
Veterans’ Benefits: Claims Processing Timeliness Performance Measures Could Be Improved. GAO-03-282. Washington, D.C.: December 19, 2002.
Veterans’ Benefits: Quality Assurance for Disability Claims and Appeals Processing Can Be Further Improved. GAO-02-806. Washington, D.C.: August 16, 2002.
Veterans’ Benefits: Despite Recent Improvements, Meeting Claims Processing Goals Will Be Challenging. GAO-02-645T. Washington, D.C.: April 26, 2002.
In 2006, the Department of Veterans Affairs (VA) paid about $3.5 billion in means-tested pension benefits to over 500,000 veterans and survivors. GAO was asked to review the management of VA’s pension program. This report assesses (1) the characteristics and trends in size of the current pensioner population, (2) the policies and procedures VA has in place to ensure that initial pension eligibility decisions are well managed, and (3) the procedures VA has in place to ensure that pensioners continue to receive the proper benefit payments on an ongoing basis. Our study included reviews of agency policies, procedures, and internal controls; site visits to 4 of VA’s 57 regional offices and all three of its pension maintenance centers; and a selected file review of new claims at three locations.

In 2006, most of the over 500,000 VA pensioners had nonpension incomes well below the federal poverty level, were beyond retirement age, and had multiple impairments. The average annual reported income of these pensioners, excluding their VA pensions, was less than $5,000. The average age of VA pensioners was 70. More than 80 percent had no spouse or dependent children. Three-fourths of veteran pensioners had multiple impairments. After reaching a peak of almost 2 million in 1978, the overall size of the pensioner population has gradually decreased, although the number of pensioners from more recent service periods has been increasing.

VA policies and procedures are not sufficient to ensure sound decisions on new pension claims. Unlike other federal agencies with similar income-based programs, VA largely does not independently verify the accuracy of financial information provided by claimants to support initial pension program eligibility. In addition, the guidance used by staff to make pension eligibility decisions is not always current or clear. Further, VA’s quality assurance review process for initial claims does not select a sufficient number of pension cases to ensure the accuracy of pension claims decisions. Finally, VA does not adequately evaluate training for pension staff.

VA procedures for assessing whether pensioners continue to receive the proper benefits have significant limitations. Although the agency requires pensioners to report changes that might affect their pensions, VA does not require documentation such as bank or asset statements when pensioners report financial changes. Also, a key data match operation with the Internal Revenue Service is not conducted in a timely or efficient manner. Finally, despite millions of dollars in improper pension payments made each year, VA lacks a system to monitor and analyze their causes.
Given its complexity and the vast number of cargo containers shipped to the United States, the supply chain is vulnerable to threats. The typical supply chain process for transporting cargo containers to the United States involves many steps and participants. The cargo container, and the material in it, can be affected not only by the manufacturer or supplier of the material being shipped, but also by the vessel carriers responsible for transporting the material to a port and by the personnel who load and unload cargo containers onto vessels. Others who may interact with the cargo or have access to the records of the goods being shipped include exporters who make arrangements for shipping and loading, freight consolidators who package disparate cargo into containers, and forwarders who manage and process the information about what is being loaded onto a vessel. Figure 1 depicts the key participants and points of transfer involved in the supply chain—from the time that a container is packed with cargo in a foreign location to its arrival at a U.S. port.

CBP has developed a layered security strategy to mitigate the risk of an attack using cargo containers. The strategy relies on related programs that attempt to focus resources on potentially risky cargo shipped in containers while allowing other cargo containers to proceed without unduly disrupting commerce into the United States. It is based on obtaining advance cargo information to identify high-risk containers, using technology to inspect containers, and partnering with foreign governments and the trade industry. A brief description of the core programs that make up CBP’s layered security strategy for cargo containers is provided in table 1.

Several U.S. laws and regulations govern the security of cargo containers and the supply chain within which they are transported. In 2006, Congress passed, and the President signed, the Security and Accountability for Every (SAFE) Port Act. The SAFE Port Act established a statutory framework for some of the programs comprising CBP’s layered security strategy, including CSI and C-TPAT, which previously had been agency programs not required by law. The SAFE Port Act also required that DHS initiate a rulemaking process and subsequently issue an interim final rule to establish minimum standards and procedures for securing containers in transit to the United States. In August 2007, the Implementing Recommendations of the 9/11 Commission Act of 2007 (9/11 Act) was enacted, amending this SAFE Port Act requirement. Specifically, the 9/11 Act required that if the interim final rule was not issued by April 1, 2008, then effective no later than October 15, 2008, all containers in transit to the United States would be required to use an ISO 17712 compliant seal. DHS did not establish standards by the set deadline, so all maritime containers in transit to the United States are now required to be sealed with an ISO 17712 compliant seal. According to DHS, it did not establish minimum standards for securing cargo containers in transit because there were no available technology solutions at the time that would adequately improve container security without significantly disrupting the flow of commerce. Although the 9/11 Act default standard is now in effect, the act provides that this standard will cease to be effective upon the effective date of a rule issued in the future pursuant to the original SAFE Port Act requirement.
In addition to the possibility of a future rulemaking in this area, DHS remains responsible for implementing an earlier provision enacted by the Maritime Transportation Security Act of 2002 (MTSA). This provision requires DHS to establish a program to evaluate and certify secure systems of international, intermodal transportation. This program is to include standards and procedures for securing cargo and monitoring security while in transit, as well as performance standards to enhance the physical security of shipping containers, including standards for seals and locks. This provision continues to govern DHS efforts to establish standards for new technology in the cargo container security area.

In response to a July 2002 memo from the then-CBP Commissioner, CBP undertook a study to identify and evaluate available technologies to improve container security. The study demonstrated that existing container seals provided inadequate security against physical intrusions. We reported in January 2006 that despite the widespread use of container seals, they are not effective in preventing tampering. For example, entry into a container through the roof or sides will not be indicated by a container seal affixed to the doors. Further, various methods to circumvent seals installed on container door hasps have been demonstrated by the Department of Defense and the Vulnerability Assessment Team at Los Alamos National Laboratory; figure 3 shows a container with a bolt seal affixed to the door hasp. Seals installed through the door hasp can be bypassed and left intact by simply removing an entire container door. Recognizing the limitations of existing container technology, CBP sought a technology that could monitor and record door openings and eventually detect and report intrusions on all six sides of a container.

CBP initiated the Smart Box program in 2004 to develop technologies with the ability to monitor the physical integrity of a container, among other things. In September 2005, CBP, in consultation with Johns Hopkins University Applied Physics Laboratory, determined through operational testing that there was no existing container security device that could meet its requirements. CBP made a second attempt, in December 2007, to find a commercially available container security device with the ability to monitor container doors for intrusion. According to CBP officials, only one security device—offered by General Electric—demonstrated the potential to meet CBP’s requirements. However, subsequent operational testing revealed that the device had a relatively high false alarm rate, which, according to CBP officials, would have resulted in an unmanageable workload for CBP staff at ports given the number of containers they would have to examine because of the alarms. According to CBP officials, before they could schedule another round of testing to determine if a revised prototype of the device would meet CBP’s requirements, General Electric decided to stop producing the device.

S&T is developing four container security technologies, which are described in table 2, in response to MTSA requirements and CBP’s need for container security technologies with the ability to detect intrusion and track the movement of containers through the supply chain.
In May 2004, S&T issued a broad agency announcement for the Advanced Container Security Device (ACSD) project, seeking industry submissions for technologies that could be developed to provide six-sided intrusion detection for cargo containers. The initial results of ACSD testing demonstrated that a solution would require years of additional investment and development. As a result of these challenges, DHS created the Hybrid Composite Container project, to embed six-sided detection in a container made of composite material, and the Container Security Device (CSD) project, to provide the capability to detect container door intrusion as an interim solution until six-sided detection is available. In November 2003, S&T issued a small business innovative research (SBIR) solicitation seeking a Marine Asset Tag Tracking System (MATTS) capable of both tracking containers worldwide and communicating the security status of the CSD and ACSD in the supply chain. Table 2 provides a description of each of the four container security technology projects, including the projects’ goals, key vendors, and time frames.

S&T’s overall objective for each of these container security technology projects is the development and delivery of performance standards for the technologies to DHS’s Office of Policy Development and CBP. Performance standards define a set of requirements that must be met by products to ensure they will function as intended. Before S&T can provide performance standards to the Office of Policy Development and CBP, the capability of the technologies to meet stated requirements must be demonstrated through the successful completion of testing and evaluation activities, as described in the technology transition agreements. S&T has defined two phases of testing and evaluation for these projects:

Phase I—Laboratory Testing: The purpose of Phase I is to identify capabilities and deficiencies in prototypes in a controlled environment to determine the likelihood of a prototype functioning under a variety of anticipated environmental and usage conditions. At least 10 prototypes are used for Phase I testing of a technology.

Phase II—Trade Lane Testing: Phase II is designed to determine whether a prototype can enhance supply chain security while minimizing the effect on cargo operations. Phase II includes testing in an operational trade lane—the route a container travels—using 100 trips from the container packing location to arrival at a U.S. port.

After successful completion of both phases of testing, S&T is to deliver performance standards—including system requirements and test plans—to the Office of Policy Development and CBP. Figure 4 shows how the testing process leads to the development of performance standards.

From 2004 through 2009, S&T spent over $60 million and made varying levels of progress in the research and development of its four container security technology projects—ACSD, CSD, Hybrid Composite Container, and MATTS—to support the development of performance standards. Each of these projects has undergone Phase I laboratory testing, but S&T has not yet conducted Phase II trade lane testing in an operational environment to demonstrate that the prototypes satisfy the requirements, which it must do before it can provide performance standards to the Office of Policy Development and CBP.
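Read as a gate, the two phases reduce to a pass/fail check that must succeed before standards can be delivered. The sketch below models that gate in Python; the class, field, and constant names are illustrative, not S&T’s.

```python
from dataclasses import dataclass

MIN_PHASE1_PROTOTYPES = 10   # Phase I uses at least 10 prototypes
REQUIRED_PHASE2_TRIPS = 100  # Phase II uses 100 trips ending at a U.S. port

@dataclass
class ProjectTestStatus:
    """Hypothetical tracker for one container security technology project."""
    prototypes_lab_tested: int
    phase1_deficiencies_resolved: bool
    trade_lane_trips_completed: int
    phase2_requirements_met: bool

def ready_for_performance_standards(status: ProjectTestStatus) -> bool:
    """Both testing phases must be completed successfully before S&T can
    deliver performance standards to the Office of Policy Development and CBP."""
    phase1_done = (status.prototypes_lab_tested >= MIN_PHASE1_PROTOTYPES
                   and status.phase1_deficiencies_resolved)
    phase2_done = (status.trade_lane_trips_completed >= REQUIRED_PHASE2_TRIPS
                   and status.phase2_requirements_met)
    return phase1_done and phase2_done
```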
Prior to the development of performance standards by S&T, each of the technology prototypes will need to undergo Phase II trade lane testing consistent with the operational scenarios that have been identified for potential implementation. According to S&T, the master test plans do not reflect all operational scenarios being considered because DHS is currently focused on using the technologies in the maritime environment.

S&T used a multiple-round process to select vendors’ technologies for development. Several vendors responded to S&T’s 2004 broad agency announcement for the ACSD project and 2003 SBIR solicitation for MATTS. The vendors’ technology proposals were evaluated on their ability to meet the project requirements, and those technologies considered to be viable were funded by S&T to develop prototypes for test and evaluation. Because of the challenges in developing an ACSD solution, S&T created the CSD project and selected vendors for the project based on the performance of vendors’ prototypes during ACSD project testing. Similarly, selection for the Hybrid Composite Container project was based on performance in the ACSD project. From 2004 through 2009, S&T provided a total of about $24 million in funding to vendors to develop container security technologies. Appendix I provides additional details on the vendor selection process.

S&T created the Container Security Test and Evaluation (CSTE) team to develop requirements and independently monitor and evaluate the performance of container security technologies. CSTE membership is composed of three Department of Energy national laboratories—Lawrence Livermore National Laboratory, Pacific Northwest National Laboratory, and Sandia National Laboratories—and the Navy’s Space and Naval Warfare Systems Center Pacific. As described in table 3, these organizations were each selected for participation based on their areas of applicable technical expertise in fields such as sensor systems, wireless communications, and maritime environment product testing. From 2004 through 2009, S&T obligated nearly $36 million to the CSTE team to develop requirements and conduct testing and evaluation of container security technologies. One of the responsibilities of the CSTE team was to develop test plans that specify the testing activities technologies must successfully undergo to move on to later phases of testing and, eventually, the development of performance standards. These test plans require that technologies be evaluated on their installation and usability, functionality, performance (including under adverse environmental conditions), and vulnerability to attack by an adversary.

The CSD project is expected to be completed on time, while MATTS is slightly behind schedule: its performance standards are expected to be delivered in December 2010 rather than in fiscal year 2010. The ACSD project is not currently being funded due to the deficiencies identified during Phase I laboratory testing, although funding may resume if one of the vendors demonstrates progress. The Hybrid Composite Container project is undergoing contract negotiations to resume work on the composite container after challenges were encountered with the vendor. Table 4 summarizes the status and expected completion date for each of S&T’s container security technology projects.
For these container security technologies to provide the functionality that DHS desires, they must interface with readers—both handheld and fixed in place—that can use wireless communications to send commands to, or gather operational or intrusion alarm status information from, the technologies for CBP’s use. Readers also serve as a means to arm and disarm ACSDs (including the sensor grid embedded in the Hybrid Composite Container) and CSDs. Because ACSDs and CSDs are mounted on the interior of a container in a manner that protects them from being physically accessed from outside the container, a remote, wireless device such as a reader is needed to turn on the devices’ intrusion detection functionality upon sealing the container (arming the device) and to turn off that functionality when the container is opened by authorized parties (disarming the device). A handheld reader would also allow an official in close proximity to the container to detect and read the ACSD or CSD to determine if the container had been opened after it was sealed. In contrast, a fixed reader has a longer range and would be designed to automatically relay such status information to a centralized data center.

ACSDs and CSDs must also support an encryption scheme for two reasons. First, commands to disarm a device must be encrypted to prevent unauthorized parties from circumventing the device by disarming it. Second, status information that a device sends may contain sensitive information, so status messages must be encrypted to protect the information during wireless transmission. Devices such as handheld readers would then be “trusted,” in that they would have the ability to handle encrypted communications with ACSDs and CSDs. Appendix II provides further information on the planned communications system supporting ACSDs and CSDs.

According to S&T, because of deficiencies observed in Phase I laboratory testing, it is not currently funding the development of any vendor’s ACSD prototype beyond Phase I laboratory testing. S&T officials added that L-3 Communications (L-3) and SAIC, the two vendors selected to participate in Phase I laboratory testing, did not demonstrate enough progress in meeting the requirements. According to S&T and CSTE team officials, meeting the requirements of the ACSD program, including detecting intrusion on all six sides of a container, has proven to be very challenging. According to S&T, it may resume funding for the development of the SAIC ACSD if SAIC demonstrates sufficient improvement in its CSD, which uses similar technology. If no ACSD is found to demonstrate enough progress in meeting the requirements, performance standards will not be delivered for this project. Table 5 summarizes the test results for the ACSDs.

During Phase I laboratory testing, conducted from April 2008 to September 2008, the L-3 ACSD prototype successfully detected container door openings. However, it failed to identify preexisting holes in containers, was unable to consistently detect wall intrusions in ideal (empty container) conditions, and was largely unable to detect wall intrusions in a loaded container. Consequently, the L-3 ACSD prototype failed the project requirement that a device detect a hole in a container. Based on the conclusions of the CSTE team, S&T decided in October 2008 not to fund the L-3 ACSD for additional testing and evaluation.
During Phase I laboratory testing, conducted from April 2008 to June 2008, the SAIC ACSD prototype detected door openings and closings, but it generated a false alarm rate higher than that permitted by the ACSD project requirements. As with the L-3 ACSD, the CSTE team concluded in September 2008 that the SAIC ACSD was deficient, and S&T decided not to provide further funding to SAIC for the ACSD project. However, according to S&T officials, SAIC’s ACSD prototype is closely related to its CSD (see below), and therefore, if SAIC’s CSD demonstrates improvement, S&T will consider funding SAIC’s ACSD for further tests and evaluations.

Performance of the two CSD prototypes varied during Phase I laboratory testing, and, according to the S&T program manager, Phase II trade lane testing is expected to begin for one of the prototypes in late 2010. S&T anticipates that Phase II trade lane testing will begin for the Georgia Tech Research Institute (GTRI) CSD in September 2010. According to S&T officials, the SAIC CSD began another round of Phase I laboratory testing in May 2010, but testing has since ceased due to the high false alarm rate the device exhibited. The S&T program manager expects to meet a November 1, 2010, due date for completion of CSD performance standards for the Office of Policy Development and CBP. Table 6 summarizes the test results for the CSDs.

While the GTRI CSD reliably and consistently detected container door openings, minor deficiencies in environmental durability and physical security were identified in the first set of Phase I laboratory testing. GTRI responded to the identified deficiencies and submitted a revised prototype for additional Phase I laboratory testing. According to the S&T program manager, S&T determined that GTRI appropriately modified its prototype to resolve the deficiencies identified in the last round of Phase I laboratory testing, and S&T plans to include this device in Phase II trade lane testing scheduled to begin in September 2010. The S&T program manager added that during Phase II trade lane testing, the CSD will be installed on containers that will travel from the Port of Shanghai, China, to Savannah, Georgia. Figure 5 shows photographs of GTRI’s and SAIC’s CSDs, which are mounted on the interior of cargo containers.

The SAIC CSD reliably and consistently detected door openings, but frequent false alarms, deficiencies in the connections of electrical components, and deficiencies in the device’s installation and mounting system were identified during Phase I laboratory testing. SAIC stated that it is adjusting the detection algorithms to reduce the device’s sensitivity to normal cargo shifting during transit, which is expected to lower the false alarm rate, and that it expects to simplify the installation procedure to address S&T’s concerns. According to the S&T program manager, the new version of SAIC’s CSD was delivered to S&T in May 2010, and during Phase I testing and evaluation it exhibited a high false alarm rate.

According to S&T, it terminated MSC’s contract to build the composite container for the Hybrid Composite Container project in June 2010 because MSC was experiencing internal management issues that were preventing the project from progressing. MSC had been building an ISO-compliant 20-foot shipping container made of a composite fiber material instead of steel. The container consists of 4-foot by 8-foot corrugated, fiber-reinforced polymer panels welded to a steel frame.
Five of the panels are welded together to form a 20-foot container wall. The container is 15 percent lighter than a steel container of the same size, and according to an official at the University of Maine (a subcontractor to MSC), it is expected to exhibit three to five times greater resistance to corrosion than a steel container. Damaged panels must be replaced, however, rather than repaired with a patch as can be done on a steel container. The container incorporates an embedded sensor grid to provide six-sided intrusion detection. In addition to the sensor grid, the composite container is to use a CSD for door-opening detection. Finally, a communications chip is integrated into the sensor grid to allow for wireless communications with readers. Previous test results of the composite container indicate that the container would likely meet or exceed ISO standards and, therefore, be suitable for use in international trade.

S&T selected GTRI to develop a sensor grid that could be embedded within the walls of the composite container to provide intrusion detection capability. The sensor grid provides ACSD-like security for the container in that a hole in the container wall would be detected by the sensor grid, which would trigger an alarm. However, one of the composite panels with the embedded sensor grid failed durability testing conducted by the vendor. Although development of the composite container has been halted, S&T has directed GTRI to continue developing its sensor grid to address this deficiency because S&T is exploring other contracting options to continue the development of the composite container. S&T anticipates that work on the composite container will resume in September 2010.

One vendor, iControl, Inc., is currently being supported by S&T to develop MATTS, which includes the iTAG, a communications tag mounted on the exterior of containers, and the iGATE, a remote reader used to communicate with the iTAG. MATTS will participate in Phase II trade lane testing with the GTRI CSD in September 2010. MATTS provides the capability to track the location of containers globally. In addition, the MATTS iTAG provides a long-range wireless communications system for CSD and ACSD devices. A CSD or ACSD device mounted on the interior of a container has a short-range wireless communications system, but the iTAG, when mounted outside of a container, can act as a relay to pass messages from the CSD or ACSD to centralized locations at a designated read point, such as a port of departure. The CSTE team conducted limited Phase I laboratory testing of the iTAG but did not conduct all needed laboratory testing because changes were still being made to the iTAG. According to the S&T program manager, the iTAG will undergo all required testing when it is produced in its final form.

While MATTS has not undergone DHS’s Phase II trade lane tests, iControl, Inc., conducted two trade lane tests of MATTS beginning in 2007 and 2008. During each of these trade lane tests, iControl, Inc., placed 100 iTAGs on 100 cargo containers and shipped them from the Port of Yokohama, Japan, to the Port of Los Angeles. At the conclusion of these tests, 199 of the 200 MATTS iTAGs arrived at their destinations. However, the trade lane testing identified deficiencies with iControl, Inc.’s MATTS iTAG. Specifically, 13 to 15 percent of the iTAGs sustained damage during the tests, including loose connectors that affected the performance of the MATTS tags.
In one test, power management features did not function as intended, resulting in battery usage in excess of that allowed by the project requirements. During the trade lane tests, iControl, Inc., did not test MATTS in conjunction with any ACSD or CSD prototypes. However, iControl, Inc., did test the environmental durability of the iTAG, as well as its power management and container tracking capabilities. According to the S&T program manager, the deficiencies identified in MATTS are being addressed by iControl, Inc., and a new version of the iTAG, in conjunction with the GTRI CSD device, will undergo Phase II trade lane testing from the Port of Shanghai, China, to Savannah, Georgia, in September 2010. The S&T program manager anticipates providing MATTS performance standards to the Office of Policy Development and CBP in December 2010. Figure 6 shows the MATTS tag mounted on a cargo container.

Before S&T can provide container security technology performance standards to the Office of Policy Development and CBP, all technology prototypes have to undergo Phase II trade lane testing, according to the master test plans. According to S&T, the MATTS tag and GTRI’s CSD are expected to undergo Phase II trade lane testing in September 2010. However, S&T’s plans for conducting Phase II trade lane testing of these container security technologies do not reflect all the operational scenarios agreed upon within DHS for how the technologies could be implemented. S&T’s master test plans define Phase II trade lane testing as 100 maritime moves to a U.S. port, but some of the operational scenarios being considered for implementation by the Office of Policy Development and CBP involve using the technologies on cargo containers that would never be placed on a vessel or that would have the technologies applied only during overland shipping after arrival in the United States. Before S&T can provide performance standards, per the technology transition agreements signed by S&T, the Office of Policy Development, and CBP, the technologies are to have been proven to work in their final form and under expected operational conditions. DHS acknowledged that the testing is limited and that future testing should reflect all the operational scenarios. Unless the container security technologies are tested in all operational scenarios, the performance standards that are delivered by S&T to the Office of Policy Development and CBP may not fully meet DHS’s or CBP’s needs. Our prior work has shown that when operational requirements are not established prior to acquisition, program performance can suffer. Conducting Phase II trade lane testing for the container security technologies consistent with all operational scenarios would better position S&T to determine if the technologies will be suitable for use in their intended operational environments.

If S&T determines that the container security technologies are mature enough to provide performance standards for these technologies to the Office of Policy Development and CBP, key steps and associated challenges remain before DHS and CBP can implement container security technologies that meet those performance standards in the supply chain.
Based on our discussions with Office of Policy Development and CBP officials, we identified three key steps that remain before implementation can occur: (1) obtaining support from the trade industry and international partners, (2) developing a concept of operations (CONOPS) that describes how the technologies are to be deployed, and (3) certifying the technologies for use in the supply chain. According to Office of Policy Development and CBP officials, they will take these steps if and when S&T is able to provide performance standards. Our work indicates that the Office of Policy Development and CBP could face challenges when executing some of these steps.

DHS could face challenges in obtaining support from the trade industry and international partners as it pursues implementation of the container security technologies. According to an Office of Policy Development director, there are two approaches DHS could likely pursue to implement container security technologies—mandatory or voluntary participation by the trade industry. The director added that if DHS determines that the universal use of container technologies would provide a worthwhile security benefit, DHS would likely pursue a rulemaking approach to mandate the use of the technologies on all U.S.-bound containers. If DHS determines that the technologies would be primarily beneficial in a more limited portion of the supply chain, though, it would work with the trade industry to encourage voluntary use of the technologies. Some members of the trade industry we spoke with were resistant to purchasing and using the technologies, given the number of container security programs they already have to comply with. Representatives of the World Shipping Council and both vessel carriers we spoke with questioned the role of vessel carriers in implementation because of the uncertainties that presently exist concerning how the technologies could be implemented and which parties are to be involved. The representatives of the two vessel carriers expressed interest in purchasing the Hybrid Composite Container because of the commercial benefit that could be provided by its reduced weight, but they added that they are not interested in spending additional money on the embedded sensor grid that is to provide the security benefit. Further, the importers we spoke with questioned their role and whether they have the authority to affix technologies to containers they do not own, as the containers they use are typically leased.

If CBP adopts a voluntary approach, it may also face challenges in getting support from C-TPAT members—its trusted private sector partners. Container security technologies could provide security benefits in the supply chain, but without assurance that illicit materials or contraband were not introduced before a container was sealed, a technology that detects intrusion could give the false impression that the container is secure, or could in effect lock dangerous or illicit cargo inside the container. Since C-TPAT members are committed to a comprehensive security process, including procedures for securing containers at the point of packing, they provide such assurance. According to DHS’s 2007 Strategy to Enhance International Supply Chain Security, the department intended to use C-TPAT Tier III members to implement commercially available container security devices that CBP previously tested.
However, C-TPAT Tier III members we spoke with were resistant to the idea of having to purchase and use technologies, such as the CSD and ACSD, on their containers to maintain their Tier III status. In particular, some of the members stated that, from a financial standpoint, the additional benefit of a reduced number of container inspections that CBP provides to Tier III members over Tier II members would not outweigh the costs of using the technologies. As a result, they stated that they would likely downgrade to Tier II status rather than have to purchase the technologies. The C-TPAT Tier III members, as well as other trade industry representatives we spoke with, said DHS should demonstrate, through a risk-benefit analysis, that using the technologies would provide a clear security benefit before making the use of such technologies a requirement. CBP officials told us that they are aware that the trade industry is generally not willing to spend money on container security technologies and that C-TPAT members question whether the cost is worth the benefit.

In addition to obtaining trade industry support, DHS will also need to obtain support from international organizations, including the World Customs Organization (WCO), to implement the new container security technologies. In order for the container security technologies to be admitted into foreign countries without being subject to import duties and taxes, as well as import prohibitions and restrictions, the technologies first have to be recognized as accessories and equipment of the containers under the Customs Convention on Containers. The convention essentially provides for the temporary admission and reexportation of containers and their accessories and equipment that meet certain requirements without the imposition of duties or taxes by any customs authority. According to a WCO director, while an individual device attached to a container most likely would be viewed as an accessory to the container, if multiple devices are shipped in bulk for reuse on other containers, the question of how to treat them for import duty purposes would be more difficult. He also noted that, if requested by a member country, WCO could provide an advisory opinion as to whether the technologies should be treated as container accessories and equipment pursuant to the Customs Convention on Containers, but the ultimate decision as to whether to classify the technologies as exempt from import duties and taxes resides with each individual foreign government.

Other options under consideration for how the container security technologies are to be implemented would also require support from foreign governments. CBP officials told us that they are considering implementing the use of container security technologies in high-risk trade lanes—trade routes that have been determined to pose the highest risk of transporting threats to the United States. S&T officials stated that another option would be to use the technologies on cargo containers departing from ports participating in the Container Security Initiative. CBP officials recognize that they will need to work with international partners and plan to do so when S&T provides performance standards.

The successful implementation of container security technologies depends on the security procedures throughout the supply chain as well as the people engaged in those procedures.
These procedures are typically documented in a concept of operations (CONOPS)—a user-oriented document that describes how an asset is to be employed and supported from the users’ viewpoint. A CONOPS also describes the operations that must be performed, who must perform them, and where and how the operations will be carried out. DHS and CBP could face challenges developing a feasible CONOPS that addresses the necessary technology infrastructure needs and protocols.

Container security technologies require a supporting technology infrastructure, including readers to communicate to customs officials whether a technology has identified an unauthorized intrusion, and a means to capture and store the data. CBP will be faced with determining who will have access to the container security technologies through readers and where to place these readers, and with obtaining permission to install fixed readers at both domestic and foreign ports. Prior work we conducted on container scanning technologies identified challenges in obtaining permission and space from terminal operators at both domestic and foreign ports to install equipment. Further, several pilots previously conducted to test the feasibility of using container security technologies have also noted challenges with establishing the reader infrastructure at ports. For example, during Operation Safe Commerce, difficulties were encountered with the installation and maintenance of fixed readers at both foreign and domestic ports. Furthermore, several foreign ports did not allow installation of the fixed readers, and problems were also encountered in installing and maintaining power to fixed readers at domestic port facilities. In addition, databases are needed to collect the data obtained by the readers from the container security technologies. Pilots have also demonstrated the challenges of establishing information systems to collect the data provided by the technologies.

Establishing protocols regarding which supply chain participants will be involved in arming and disarming the technologies, reading the status messages generated by the technologies, responding to alarms, and accessing data will also be important. For example, if the CONOPS calls for technologies to first be affixed to a container at the point of packing, packers will need the ability to install and arm the technologies. The packing of goods into cargo containers can be handled by a number of different parties, including the shipper (i.e., seller), a third-party consolidator, or the buyer. Regardless of which party is packing the container, these participants have the last visual check of the goods before they are sealed for transport. At any point during the transfer of the container from its packing point to the port of embarkation, foreign customs may need to stop and open a container for inspection. In these instances, it will be important to ensure that foreign customs officials have the ability to arm and disarm the technologies so they can open a container without triggering the alarm. Response protocols will need to be developed that include information on which parties are to respond to an alarm and the associated processes for responding.
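The arming and disarming protocols sketched above imply a simple life cycle for each device. The following Python fragment models that life cycle as a state machine. It is a minimal sketch for illustration only; the actual CSD and ACSD firmware, commands, and authentication mechanisms are not specified in this report, so the class and method names are hypothetical.

```python
from enum import Enum, auto

class DeviceState(Enum):
    DISARMED = auto()  # detection off; container may be opened by authorized parties
    ARMED = auto()     # detection on; an opening triggers an alarm
    ALARMED = auto()   # intrusion recorded; a response protocol must follow

class ContainerDevice:
    """Illustrative arm/disarm/alarm state machine for a CSD or ACSD."""

    def __init__(self) -> None:
        self.state = DeviceState.DISARMED

    def arm(self) -> None:
        """Invoked via a reader once the container is packed and sealed."""
        if self.state is DeviceState.DISARMED:
            self.state = DeviceState.ARMED

    def disarm(self, command_authenticated: bool) -> None:
        """Authorized parties (e.g., foreign customs officials inspecting a
        container in transit) disarm the device so opening it does not alarm."""
        if self.state is DeviceState.ARMED and command_authenticated:
            self.state = DeviceState.DISARMED

    def door_opened(self) -> None:
        """Any opening while armed is treated as an intrusion."""
        if self.state is DeviceState.ARMED:
            self.state = DeviceState.ALARMED
```

In this model, a legitimate customs inspection is an authenticated disarm, an inspection, and a re-arm, while any opening without an authenticated disarm leaves the device in the alarmed state until the response protocol runs its course.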
While CBP would likely respond to a container alarm by first scanning the container with nonintrusive inspection (NII) equipment to mitigate any potential danger to a CBP officer entering the container to conduct a physical examination, CBP officers may not be nearby when an alarm occurs, particularly if it occurs during a container’s transport to a foreign port, at a non-Container Security Initiative port, or while on a vessel in transit. Furthermore, CBP will also need to consider whether foreign governments’ customs agencies will be allowed access to the data generated by the technologies on containers departing their respective ports.

Once a CONOPS is developed, certification testing can take place to determine the suitability of technologies consistent with the CONOPS. According to CBP officials, CBP plans to conduct certification testing to demonstrate whether technology products meet the performance standards issued by S&T and are suitable for implementation consistent with its operational concept. CBP officials stated they would begin the certification process by issuing a request for information seeking vendors to submit technologies for certification testing. Interested container security technology vendors would submit their products to CBP for certification testing, which consists of a mix of laboratory and trade lane testing to demonstrate whether the products meet the performance standards. According to CBP officials, they would determine a means to select vendor products for testing and then establish detailed methods to test and evaluate the technology products submitted by the vendors. Office of Policy Development and CBP officials we spoke with anticipate that certification testing would take approximately 3 to 4 months. The officials added that, in advance of the testing, preparation time is needed to solicit participants from the trade industry and select trade lanes for testing. After conducting the tests, additional time will be needed to analyze the results to determine if a vendor’s technology product will function as intended in the supply chain. If a technology product successfully completes certification testing, DHS will certify it as meeting its standards, and the trade industry will be able to purchase it for use in the supply chain. Technologies that are successful during certification testing are expected to be implemented in the supply chain, according to an Office of Policy Development director. Figure 7 shows the process of developing an approved products list.

Container security technologies have the potential to contribute to CBP’s layered security strategy by tracking containers and by detecting and reporting intrusions as containers move through the supply chain. S&T has made progress in testing and evaluating certain container security technologies, and continues to work with vendors to develop these technologies, but challenges continue in finding technologies that can provide intrusion detection through any of the six sides of a container. The ACSD project is not currently being funded due to the deficiencies identified during Phase I laboratory testing, and the Hybrid Composite Container project is undergoing contract negotiations to resume work on the composite container after challenges were encountered with the vendor.
In contrast, the CSD and MATTS projects—which will provide intrusion detection through container doors and a communications system, respectively—are nearing completion, and S&T expects to deliver performance standards to the Office of Policy Development and CBP by the end of 2010. Before delivering the performance standards, S&T must demonstrate that these container security technologies can work in the operational environments in which they are intended to be used. However, the operational environment testing that S&T plans to conduct is limited to the maritime environment and does not fully address the operational scenarios being considered by the Office of Policy Development and CBP. Until all intended operational scenarios are tested, S&T cannot provide reasonable assurance that the container security technologies would effectively function in all the operational scenarios identified by the Office of Policy Development and CBP for potential implementation. Conducting Phase II trade lane testing for the container security technologies in all intended operational scenarios would better position S&T to determine if the technologies will be suitable for use in their intended operational environments.

To ensure that the container security technologies being developed will function in their intended operational environments, we recommend that the Secretary of Homeland Security instruct the Assistant Secretary of the Office of Policy, the Commissioner of U.S. Customs and Border Protection, and the Under Secretary of the Science and Technology Directorate to test and evaluate the container security technologies consistent with all of the operational scenarios DHS identified for potential implementation, before S&T provides performance standards to the Office of Policy Development and CBP.

We provided draft copies of this report to the Secretaries of Homeland Security, Energy, and Defense for review and comment. DOE and DOD did not provide official written comments to include in the report. DHS provided official written comments, which are reprinted in appendix III. DHS concurred with our recommendation. In addition, DHS and CBP provided technical comments, which we incorporated as appropriate. In response to DHS’s technical comments and subsequent discussion with agency officials, we modified our recommendation to clarify its intent that DHS test and evaluate container security technologies consistent with all of the operational scenarios it has identified for potential implementation.

We are sending copies of this report to the Secretaries of Homeland Security, Energy, and Defense, and to interested congressional committees. In addition, the report will be available on GAO’s Web site at http://www.gao.gov. If you or your staff members have any questions about this report, please contact Stephen L. Caldwell at (202) 512-9610 or Timothy M. Persons at (202) 512-6412, or by e-mail at caldwells@gao.gov or personst@gao.gov, respectively. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix IV.

This appendix provides information on how the Department of Homeland Security’s (DHS) Science and Technology (S&T) Directorate selected vendors to participate in the four container security technology projects. S&T used a multiple-round process to select vendors’ technologies for development.
Several vendors responded to S&T’s 2004 broad agency announcement (BAA) for the Advanced Container Security Device (ACSD) project and 2003 small business innovative research (SBIR) solicitation for the Marine Asset Tag Tracking System (MATTS). Respondents’ technology proposals were evaluated on their ability to meet the project requirements, and those considered to be viable were selected by S&T to participate in Round I. S&T selected vendors for subsequent rounds of development based on vendor performance and proposals. Vendor selection for the Container Security Device (CSD) project was based on the performance of prototypes during Round I of the ACSD project. Similarly, selection for the Hybrid Composite Container project was based on performance in the ACSD project. Table 7 provides information on the vendors selected to participate in each of the projects and the funds provided to the vendors.

Appendix II provides information on the communications system used to support container security technologies. Because ACSDs (including the sensor grid embedded in the Hybrid Composite Container) and CSDs are mounted inside of a container without a physical connection accessible from the outside of a closed container, a wireless communications system is to facilitate the remote arming (activating the intrusion detection capabilities) and disarming (deactivating the intrusion detection) of the ACSDs or CSDs. Furthermore, the communications system is to allow U.S. Customs and Border Protection (CBP) remote access to status information from an ACSD or CSD, including information about the health of the device and whether the device had detected an intrusion. ACSDs and CSDs are intended to be a single component of a larger Security Device System, which may also include the following components (see fig. 8):

Communications Modules (CM): These devices are mounted on the exterior of a container. A CM is to relay status information from an ACSD or CSD to a fixed status reader using radio frequency (RF) at 2.4 GHz or cellular communications. iControl, Inc., is developing a device known as the iTAG under the MATTS project to serve as a CM.

Fixed status readers: These devices are to receive status information from ACSDs or CSDs located within 100 feet of the reader (or status updates relayed by a CM) and relay that status information using a variety of methods, such as RF, cellular, or Ethernet access, to a centralized data center. iControl, Inc., is developing a device known as the iGATE under the MATTS project to serve as a fixed status reader.

Handheld readers: These are to be used by CBP or other authorized parties to receive status information from ACSDs or CSDs located within 10 feet of the reader.

Centralized data centers: These centers are to receive status information from CMs and readers and allow CBP or other authorized parties to remotely monitor status information from all ACSDs and CSDs in the area served by the data center.

ACSDs and CSDs should be able to communicate to a reader with or without the use of a CM. If no CM is mounted with an ACSD or CSD, the ACSD or CSD can communicate—by means of short-range RF at 2.4 GHz using communications capabilities on the ACSD or CSD itself—intrusion alerts and periodic general status updates to a fixed status reader located within 100 feet of the monitored container or to a handheld reader located within 10 feet of the monitored container.
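A short sketch may help make this status reporting concrete. As the paragraphs that follow explain, status messages are to be encrypted on the device itself so that CMs and fixed readers can relay them without being able to read them. The Python fragment below uses AES-GCM, an authenticated-encryption scheme, purely as a stand-in: the report does not specify the actual cipher, key management, or message format, and the field names are hypothetical.

```python
import json
import os

from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# A single key shared with the CSDs/ACSDs, handheld readers, and data
# centers, but not with the unattended (and therefore untrusted) fixed readers.
KEY = AESGCM.generate_key(bit_length=128)

def encrypt_status(device_id: str, armed: bool, intrusion: bool) -> tuple[bytes, bytes]:
    """Encrypt a status message on the device itself, before transmission,
    so CMs and fixed readers can relay it without reading its contents."""
    message = json.dumps(
        {"device": device_id, "armed": armed, "intrusion_detected": intrusion}
    ).encode()
    nonce = os.urandom(12)  # AES-GCM requires a unique nonce per message
    return nonce, AESGCM(KEY).encrypt(nonce, message, None)

def decrypt_status(nonce: bytes, ciphertext: bytes) -> dict:
    """Run at a handheld reader or data center holding the shared key."""
    return json.loads(AESGCM(KEY).decrypt(nonce, ciphertext, None))
```

Because decryption here also authenticates the message, a forged or altered status report presented to a handheld reader or data center would be rejected rather than silently accepted.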
If a CM is associated with an ACSD or CSD, the ACSD or CSD can use short-range RF communications to relay messages through its CM to a more remote reader. If an ACSD or CSD needs to send status information to the data center while out of range of a reader, the external CM can attempt to relay the information through other CMs mounted on nearby containers until a reader is in range. This relayed communications process is known as “meshing.” Similarly, if a reader is unable to communicate to the data center, it may attempt to pass messages to other nearby readers until communication with the data center is achieved.

Secure data generated by the ACSDs and CSDs are to be protected by translating the data into an unreadable form using a code (encryption). This encryption is to occur directly on the ACSDs and CSDs to avoid possible interception of confidential information transmitted during normal operation. Transmitted information includes security-related information used by CBP to determine the status of a container, but it may also include proprietary shipping information used by carriers or shippers (although such information must be encrypted separately). The encryption scheme also allows remote disarming of the devices (arming need not be done with an encrypted command), as only those devices with the encryption key will be capable of sending commands that the ACSDs or CSDs will recognize. The ACSDs, CSDs, handheld readers, and data centers (but not the fixed readers, as they are unattended and insecure) will be provided with the encryption key, allowing these components of the Security Device System to exchange information in a secure manner.

Communication of status information to remote readers for transfer to a data center is to occur, at minimum, at all points where reading is specified by DHS. These read points include the point of packing, the entrance gate at the port of departure, the exit gate at the port of arrival, and the entrance gate at the point of deconsolidation (where a container is unpacked). Communications should use a nonproprietary format designed specifically for this application; this ensures that a Security Device System is permissible under all necessary international communications standards.

In addition to the contacts named above, Christopher Conrad and Richard Hung, Assistant Directors, and Lisa Canini, Analyst-in-Charge, managed this review. Leah Anderson, Alana Finley, Scott Fletcher, Adam Mirvis, and Matthew Tabbert made significant contributions to the work. In addition, Stanley Kostyla assisted with design and methodology; Frances Cook provided legal support; Katherine Davis and Lara Miklozek provided assistance in report preparation; and Pille Anvelt and Avy Ashery helped develop the report’s graphics.

The terms below are defined for the purposes of this GAO report.

Cargo: The freight (goods or products) carried by a vessel, barge, train, truck, or plane.
Concept of Operations (CONOPS): A user-oriented document that describes how an asset, system, or capability will be employed and supported from the users’ viewpoint. A CONOPS also describes the operations that must be performed, who must perform them, and where and how the operations will be carried out.
Consolidator: The party who packs the container or arranges for the packing of the container.
Container: A box made of aluminum, steel, or fiberglass used to transport cargo by ship, rail, truck, or barge.
Common dimensions are about 20 feet x 8 feet x 8 feet (called a TEU, or 20-foot-equivalent unit) or about 40 feet x 8 feet x 8 feet.
Customs: Government agency charged with enforcing the laws and rules passed to enforce the country’s import and export revenues. In the United States these responsibilities are handled by U.S. Customs and Border Protection.
Customs broker: The person who prepares the needed documentation for importing goods (just as a freight forwarder does for exports). In the United States, the broker is licensed under federal regulations to act on behalf of others in conducting transactions related to federal import and export requirements.
Exporter: A person or company that is responsible for the sending of goods out of one country to another.
Freight forwarder: An individual or company that prepares the documentation and coordinates the movement and storage of export cargoes. See also customs broker.
Importer: A person or company that brings in goods from a foreign country.
Maritime move: A one-way trip through the supply chain from stuffing to U.S. port of arrival on an ocean-going vessel.
Nonintrusive inspection: Using technologies to scan the contents of a container without opening the container.
Non-vessel operating common carrier: A carrier that buys space aboard a ship to get a lower volume rate and then sells that space to various small shippers, consolidates their freight, issues bills of lading, and books space aboard a ship.
Performance standards: Requirements that must be met by products to ensure they will function as intended.
Physical examination: The opening of a container and removal of its contents for inspection.
Probability of detection: The likelihood that a device will properly alarm when in the armed mode.
Probability of false alarm: The likelihood that a device will improperly alarm, when in the armed mode, due to environmental conditions or conditions other than opening or removing the door(s).
Prototype: A functional preproduction version of a new type of product.
Red teaming: Testing performed from the perspective of an attacker with malevolent intentions, to identify and exploit weaknesses in a technology. The results of these tests allow for a better understanding of the risk associated with the corresponding device or system.
Scanning: Nonintrusively inspecting the contents of a container using technologies.
Screening: Assessing the security risk posed by a container based on available information.
Shipper: The person or company that is usually the supplier or owner of commodities shipped.
Supply chain: The international network of retailers, distributors, transporters, storage facilities, and suppliers that participate in the sale, delivery, and production of goods.
Trade lane: A sea route ordinarily used by vessels.
Twenty-Foot Equivalent Unit (TEU): A unit of measurement equal to the space occupied by a standard 20-foot container. Used in stating the capacity of a container vessel or storage area. One 40-foot container is equal to 2 TEUs.
Vendor: An entity that develops container security technology prototypes.
Vessel: A ship or large boat.
Vessel carrier: Any person or entity who, in a contract of carriage, undertakes to perform or to procure the performance of carriage by sea.
Vessel manifest: A document that includes, among other things, a list of cargo being carried by the vessel.

Maritime Security: DHS Progress and Challenges in Key Areas of Port Security. GAO-10-940T. Washington, D.C.: July 21, 2010.
Combating Nuclear Smuggling: DHS Has Made Some Progress but Not Yet Completed a Strategic Plan for Its Global Nuclear Detection Efforts or Closed Identified Gaps. GAO-10-883T. Washington, D.C.: June 30, 2010.
Supply Chain Security: Feasibility and Cost-Benefit Analysis Would Assist DHS and Congress in Assessing and Implementing the Requirement to Scan 100 Percent of U.S.-Bound Containers. GAO-10-12. Washington, D.C.: October 30, 2009.
Cargo containers could be used to transport unlawful cargo, including weapons of mass destruction, illicit arms, stowaways, and illegal narcotics into the United States. Within the Department of Homeland Security (DHS), U.S. Customs and Border Protection (CBP) is responsible for container security. To enhance container security, CBP has partnered with DHS's Science and Technology (S&T) Directorate to develop performance standards--requirements that must be met by products to ensure they will function as intended--for container security technologies. After successful completion of testing, S&T plans to deliver performance standards to DHS's Office of Policy Development and CBP. As requested, this report addresses (1) the extent to which DHS has made progress in conducting research and development and defining performance standards for the technologies, and (2) the remaining steps and challenges, if any, DHS could face in implementing the technologies. GAO, among other things, reviewed master test plans for S&T's four ongoing container security technology projects, and interviewed DHS officials. DHS has conducted research and development for four container security technology projects, but has not yet developed performance standards for them. From 2004 through 2009, S&T spent approximately $60 million and made varying levels of progress in the research and development of its four container security technology projects. These projects include the Advanced Container Security Device (ACSD), to detect intrusion on all six sides of a container; the Container Security Device (CSD), to detect the opening or removal of container doors; the Hybrid Composite Container, a lightweight container with an embedded sensor grid to detect intrusion on all six sides of the container; and the Marine Asset Tag Tracking System (MATTS), to track containers. The ACSD and Hybrid Composite Container technologies have not yet completed laboratory testing, but the CSD and MATTS are proceeding to testing in an operational environment, which will determine if the technologies can operate in the global supply chain--the flow of goods from manufacturers to retailers. S&T's master plans for conducting operational environment testing, however, do not reflect all of the operational scenarios the Office of Policy Development and CBP are considering for implementation. According to DHS guidance, before S&T can provide performance standards to the Office of Policy Development and CBP, the technologies are to have been proven to work in their final form and under expected operational conditions. Until the container security technologies are tested and evaluated consistent with all of the operational scenarios DHS identified for potential implementation, S&T cannot provide reasonable assurance that the technologies will effectively function as the Office of Policy Development and CBP intend to implement them. If S&T determines that the container security technologies are mature enough to provide performance standards for these technologies to the Office of Policy Development and CBP, key steps and challenges remain before implementation can occur. These key steps involve (1) obtaining support from the trade industry and international partners, (2) developing a concept of operations (CONOPS) detailing how the technologies are to be deployed, and (3) certifying the technologies for use. The Office of Policy Development and CBP plan to take these steps if and when S&T provides performance standards. 
GAO recommends that DHS test and evaluate the container security technologies consistent with all the operational scenarios DHS identified for potential implementation. DHS concurred with GAO's recommendation.
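The glossary's two device performance measures, probability of detection and probability of false alarm, are commonly estimated from armed-mode test trials. The sketch below is illustrative only; the trial counts are hypothetical and do not represent S&T's actual test data or procedures:

    # Estimate a container security device's performance from test trials.
    # Probability of detection: share of armed-mode intrusions that alarmed.
    # Probability of false alarm: share of armed-mode benign trials that
    # alarmed anyway (e.g., due to vibration or humidity). All counts are
    # hypothetical.
    intrusion_trials, intrusion_alarms = 200, 192
    benign_trials, nuisance_alarms = 500, 7

    p_detect = intrusion_alarms / intrusion_trials
    p_false_alarm = nuisance_alarms / benign_trials
    print(f"Probability of detection: {p_detect:.3f}")        # 0.960
    print(f"Probability of false alarm: {p_false_alarm:.3f}")  # 0.014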
The Department of Defense’s bottom-up review concluded that the Army’s reserve components should be reduced to 575,000 positions by 1999—a 201,000 decrease since fiscal year 1989. A group of senior officers of the Army, its reserve components, and organizations that represent Army component issues was tasked with providing a recommendation to the Secretary of the Army on the allocation of the 575,000 positions between the Guard and Reserve. The group, through the Offsite Agreement, allocated the positions as follows: 367,000 positions to the Army National Guard and 208,000 to the Army Reserve. The agreement also included a realignment of functions between the Guard and Reserve. This is to be accomplished through three separate approaches—swap, migration, and reallocation. The swap involves about 10,000 authorized positions in each reserve component. The Guard agreed to inactivate 128 combat support and combat service support units such as medical, military police, and transportation units and transfer about 10,000 authorized positions associated with these units to the Reserve. The Reserve agreed to inactivate 28 units, including most of its remaining combat units and its last remaining special forces units, and transfer about 10,000 positions associated with these units to the Guard. According to the Army, the swap will more clearly concentrate combat support and combat service support functions in the Reserve and combat functions in the Guard. The migration involves the transfer of about 4,300 authorized positions and over 250 helicopters from the Reserve to the Guard. The Reserve agreed to nearly deplete its helicopter resources by inactivating 11 utility helicopter aviation and aviation maintenance units and 15 medical air ambulance units. According to Guard officials, the migration and other initiatives will provide enough helicopters for the Guard to cover the needs of each state. Without the migration, this objective would have been jeopardized because the Guard is scheduled to lose helicopters as part of the Army’s general downsizing. The reallocation allows the Guard to keep about 7,700 authorized positions for engineer and military police units that otherwise would have been inactivated. According to a Guard official, this will enable the Guard to better support its state missions. Other units were eliminated so the positions could be reallocated within the Guard. The reallocation does not affect the Reserve, nor does it affect the personnel end strength of the Guard. The Army and its reserve components considered several factors in calculating the cost to implement the agreement. The factors include the percentage of personnel who would separate from military service and receive benefits, the number of facilities that would have to close, and the amount of goods and equipment that would have to be moved. In March 1994, the Assistant Secretary of Defense for Reserve Affairs and the Vice Chief of Staff of the Army testified that the short-term cost to implement the agreement was less than $100 million. According to Army officials, this was a rough estimate because the Army could not be certain how many military persons would transfer or leave. Also, the Army could not determine the actual cost of closing facilities and transporting goods until the reserve components identified which units would be affected. 
However, when we began our audit work in June 1994, the Army estimated the total cost of implementing the agreement at about $38 million from fiscal year 1995 to fiscal year 1999. In response to our audit questions, the Army revised some of its estimates and, on the basis of these revisions, increased its estimate to about $85 million. For example, Army officials projected that transition benefits for Reservists whose units will deactivate would probably be greater than originally estimated. The Army also estimated that the Reserve will need more funds for training and construction of facilities and that the operations costs for units involved in the swap would be more than anticipated. However, we believe that this revised estimate is understated by about $100 million because it excludes training costs that are related to the agreement and includes savings that are not a result of the agreement. (Adding the roughly $14 million in excluded training costs and the $82.5 million in improperly included savings, both discussed below, brings the total implementation cost to over $180 million.) In table 1, we compare our estimate with the Army's initial and revised estimates. We accepted the Army's revised estimates for transition benefits, transportation of equipment, and costs of facilities because we had no basis to question their reasonableness. However, we found that the revised estimate excluded training costs that the Guard will likely incur and included savings in aircraft operating costs that resulted from another initiative. The Guard will receive the missions of five Reserve assault helicopter battalions that were being modernized with Blackhawk helicopters. The Reserve had trained the equivalent of 3-1/2 battalions for the Blackhawk systems. The Guard did not include in its estimates the cost to train a like amount of personnel. We estimate this training cost to be about $14 million because the Guard units that will take over the Blackhawk missions have only a few Blackhawk-trained personnel. Also, most Blackhawk-qualified Reserve personnel may not join the Guard. The Guard will also have to train the remaining 1-1/2 battalions, but we do not consider this a cost of the agreement because it is an expense that the Reserve would have had if it were not for the agreement. The Army estimated that the Guard will avoid about $82.5 million in operating expenses by turning in excess nonmodern aircraft once the Blackhawks arrive. We believe the savings should not be attributed to the agreement because these aircraft have been programmed for disposal for several years. Consequently, we deleted the $82.5 million savings from the operations cost category, leaving an anticipated savings of $13.7 million. According to Guard officials, the $13.7 million savings to the federal government is that part of the Guard's operating costs that is paid by state funds. The Department's current system for reporting readiness to the Joint Chiefs of Staff is the Status of Resources and Training System. This system measures the extent to which individual service units possess the required resources and training to undertake their wartime missions. The system compares the current status of specific elements considered essential to unit readiness—personnel and equipment on hand, equipment condition, and the training of operating forces—with those needed to undertake wartime missions. We compared the readiness levels (as of April 1994) of the inactivating units with the readiness levels of the units assuming the missions of the inactivating units. Table 2 shows the results of that comparison. We could not estimate the agreement's impact on readiness for 152 of the 182 units affected by the swap and migration.
However, we estimated the readiness impact for some units. Thirteen units will be replaced by units with lower readiness ratings, while 17 units will be replaced by units having the same or higher readiness ratings. We do not have estimates for the agreement’s readiness impact on 152 units because new units are being created or individual units have not been designated to replace inactivating units. For example, we cannot identify the readiness impact for 20 of the 28 Reserve to Guard transfers involved in the swap because the Guard did not designate specific units that will assume the missions of the 20 Reserve units. In all but one of the 108 Guard to Reserve transfers, we could not estimate the readiness impact because they involved the establishment of new Reserve units. The 107 new Reserve units have up to 1 year to organize and build up their readiness ratings before the Guard units are inactivated. During this year, the Reserve units’ readiness ratings can be expected to improve as the units obtain personnel and equipment and train their personnel, while the Guard units’ ratings can be expected to decrease as these units lose personnel and equipment. Hence, the impact on readiness could vary over time. For some units, this time could be very short. For example, 37 Reserve units are being established within 50 miles of existing inactivating Guard units to utilize Guard personnel, equipment, and facilities. We were told that in some of these cases, Guard units will convert to Reserve units. In 13 instances, some degradation in readiness may occur. For example, two Guard units that will take on the missions of Reserve Blackhawk helicopter units do not have enough Blackhawk helicopters or trained personnel to satisfy unit requirements. Reserve unit personnel told us that it may take 3 to 5 years before these Guard units reach the readiness level of the Reserve units that are deactivating. According to Army officials, the Army plans to convert these units within a 3-year period, and they anticipate, on the basis of National Guard historical data, that unit readiness will not be degraded longer than 1 year during the conversion. In the other 11 instances, Guard units had higher overall readiness ratings than existing Reserve units taking on their missions. In 17 instances, we noted either little impact on readiness or an improvement in readiness. For example, nine inactivating Guard units had the same or higher overall readiness ratings as existing Reserve units taking on their missions. Similarly, six of the eight Guard field artillery and armor units taking on the missions of Reserve units had higher overall readiness ratings than the inactivating Reserve units. Contingency force pool units support a crisis response force, serve as follow-on forces, or serve as forces in a separate contingency. It is important for these units to maintain a high state of readiness because these units often deploy to military conflicts early—sometimes even before some active units. Fifty-eight inactivating Guard units in the swap and seven inactivating Reserve units in the aviation migration portion of the agreement had contingency force pool designations. These designations did not always transfer to the units that assumed the missions of the inactivating units. We found that the agreement’s impact on readiness varied on a unit-by-unit basis. Most of the Guard’s contingency force pool designations transferred to the Reserve as of November 22, 1994. 
For 44 of the 58 units, existing Reserve units assumed the contingency force pool assignments previously assigned to the Guard, while for 14 units, new Reserve units will take on the assignment. We found that 29 of the 44 Guard units had higher overall readiness ratings than the Reserve units taking on the contingency force pool assignment. For the remaining 15 units, the overall readiness ratings for the Reserve units are equal to or higher than those of the Guard units. We could not ascertain the impact on readiness for the 14 new Reserve units. The Reserve aviation units' contingency force pool designations transferred to the Guard as of November 22, 1994. In six of the seven cases, the Guard units had the same or higher overall readiness ratings as the Reserve units they are replacing. In the remaining case, the Guard unit had a lower readiness rating. Most of the Reserve troops facing inactivation will be released during fiscal year 1995, while most of the affected Guard troops will not be inactivated until subsequent years. Table 3 shows the number of units and authorized positions that will be affected in fiscal years 1994-95 and 1996-97. Because we cannot anticipate what future actions the reserve components will take to accommodate displaced personnel, we focused our attention on three primary areas affected by the agreement in fiscal years 1994 and 1995—the 157th Separate Infantry Brigade, aviation units, and special operations units. These account for 23 of the 40 units and about 6,900 of 9,600 authorized personnel. We found that the Army Reserve Command is helping inactivated soldiers find new positions in other Reserve units but is not helping them switch to the Guard even though the available Guard positions are more consistent with their occupational skills and offer greater longevity. For example, the Reserve Command in eastern Pennsylvania has offered assignments to nearly all the troops in the 157th Separate Infantry Brigade. These positions generally are in other Reserve units within a 50-mile range of the soldiers' homes. These include several new units in eastern Pennsylvania established to accommodate troops from the 157th. But many of the offers will be for overstrength positions that can only be held for 1 year, and few will be for assignments in the soldiers' current occupational skills. Reserve officials told us that they expect few permanent positions to become available to senior officers and enlisted personnel. Reserve and Guard officials told us that many soldiers in the 157th would rather switch from the Reserve to the Guard because they are combat soldiers and the Guard is the only reserve component with combat units. We were also told that the Reserve will release some troops to the Guard but is doing several things that will make switching unattractive. For example, soldiers transferring to Reserve positions and requiring new occupational skills will immediately begin training for the new positions, while soldiers who elect to join the Guard will be used to close out the Brigade and will not be released until the inactivation date for the 157th, which is scheduled for September 1995. Pennsylvania Army National Guard officials told us that, except for senior officers and enlisted persons, they would welcome the transfer of troops from the 157th. To make the transfer to the Guard more attractive, the Guard recently announced that it would honor most Reserve bonus contracts and student loan repayment plans.
Most Reserve helicopter pilots, technicians, and civilians associated with aviation units will have difficulty finding new units in the Guard. The Guard already has personnel for most of these positions, except for the Blackhawk units where the Guard has few qualified Blackhawk personnel. However, even for these units, we do not anticipate that many Reserve aviators will transfer to the Guard because the Guard is training its own personnel to fill available positions. For example, in Illinois, the Guard has assigned the Blackhawk mission to a unit some distance away from the inactivating Reserve unit and is training Guard personnel to become Blackhawk qualified. The National Guard Bureau has requested that the state adjutants general establish assignment advisory boards for aviation personnel, which would match available Reservists and Guard personnel with available positions and select those who are best qualified. As of January 1995, most states affected by the agreement had scheduled advisory boards. The Army National Guard recruited inactivating Reserve special forces personnel and added them to existing Guard units or to special temporary detachments it created. For example, the Guard created three detachments with an authorized strength of 83 persons each to accommodate personnel of the Reserve 12th special forces group. This arrangement places the Guard in an overstrength position with too many units, a situation that Army officials stated would be remedied within 18 months. During this time, the Guard plans to assess all Guard special forces units and retain those units having the highest readiness ratings and sustainability at the end of the test period. We learned of other initiatives to accommodate displaced personnel. For example, the Reserve is establishing 37 new units within 50 miles of inactivating Guard units. It plans to recruit the deactivating Guard personnel for these units. Included in the 37 are 6 watercraft units in Washington State, which are to assume the missions of deactivating Guard units. The Defense Appropriations Act for fiscal year 1995 directed the Secretary of the Army to ensure that members of units inactivating as a result of the agreement be reassigned to remaining units to the maximum extent practicable. It further directed the Secretary to submit semi-annual reports to the congressional defense committees on the number of members reassigned while the agreement is in effect. The Offsite Agreement places all reserve component special forces in the Guard, which is generally state-controlled during peacetime. We found no evidence that the Guard's status would hinder the Special Operations Command's training responsibilities under the Goldwater-Nichols Department of Defense Reorganization Act of 1986. The Goldwater-Nichols Department of Defense Reorganization Act of 1986 authorizes combatant commands to exercise command and control over their forces. As a combatant command, the U.S. Special Operations Command is responsible for preparing active and reserve component special operations forces to carry out assigned missions, including the training of assigned forces. As we reported in March 1994, special operations forces have become an integral part of the combatant commanders' peacetime mission. Overseas training exercises are held frequently in support of this mission, and according to Command officials, reserve component forces are often called upon to participate in this training.
For example, troops from the Mississippi, Maryland, and Alabama National Guards conducted training programs for military personnel and provided assistance to local citizens in Honduras in 1994. Further, the Guard's participation in overseas training exercises is ensured as a result of a 1990 U.S. Supreme Court decision. This decision affirmed a federal law restricting governors from withholding consent for overseas training for Guard units put on active duty. The Department of Defense is formulating policy guidance that will clarify the relationship between the Guard and the combatant commands as established by law and will ensure that governors' authority over their National Guard forces is not limited when these forces are not in federal service. An Army National Guard official told us that the policy guidance should more clearly give combatant commands authority over training and readiness of assigned reserve component forces. In commenting on a draft of our report, the Department of Defense agreed with all of our findings except for our cost estimate to implement the Offsite Agreement. Specifically, the Department said that the $82.5 million in cost avoidance for the early inactivation of aviation units is attributable to the agreement and should be included in our estimate. We continue to disagree with the Department's position. In February 1993, 10 months prior to the Agreement, the Army's Aviation Restructuring Initiative directed the National Guard to inactivate over 600 helicopters because they were no longer needed to support National Guard missions. The Department said that the National Guard agreed to turn in helicopters earlier than required by the Initiative because of the agreement. The Department further said that the $82.5 million is attributable to savings in operations and maintenance due to the early turn-in schedule. The Department was not able to produce convincing evidence that the agreement had any impact on the National Guard's turn-in schedule. Since the Guard was already required to turn in these aircraft, we continue to believe that the savings should not be attributed to the agreement. The Department's comments are shown in appendix I. We reviewed the provisions of the agreement and the actions taken by the Army and reserve components to implement it. We spoke with Department of the Army, Army Reserve, and Army National Guard officials to obtain documents and other information pertaining to the cost and readiness implications of the agreement, the reserve components' efforts to absorb displaced personnel, and the agreement's impact on control of special forces in the reserve components. We also spoke with officials of the U.S. Special Operations Command to discuss the Command's control over Guard units. We visited National Guard and Army Reserve Command offices and units in Pennsylvania and Illinois and an Army Reserve Command office in Missouri to discuss actions planned or underway to assist displaced personnel in finding new units. We also met with Army Reserve Association officials to discuss their views on the agreement. The association is not represented in the Offsite group. Our review was conducted between May and December 1994 in accordance with generally accepted government auditing standards. We are sending copies of this report to the Secretary of Defense, the Secretary of the Army, appropriate congressional committees, and other interested parties. We will also make copies available to others on request.
The major contributors to this report are Robert Pelletier, Donald Campbell, Mae Jones, Paul O'Brien, and Frances Scott. Please contact me at (202) 512-3504 if you or your staff have any questions concerning this report. The following is a GAO comment on the Department of Defense's letter dated February 10, 1995. 1. The $22.6 million for the early inactivation of overstructured aviation units is included in the $82.5 million we deleted from Army's savings estimates.
Pursuant to a congressional request, GAO reviewed the Offsite Agreement, a major restructuring of the Army National Guard and Army Reserve, focusing on the agreement's: (1) implementation costs; (2) impact on readiness; (3) efforts to absorb displaced personnel; (4) effect on the implementation of the Goldwater-Nichols Department of Defense Reorganization Act of 1986; and (5) impact on the Special Operations Command's training of special Guard forces. GAO found that: (1) implementation of the Offsite Agreement could cost over $180 million; (2) the Army's latest cost estimate is about $85 million; (3) GAO believes that the Army's estimate excludes training costs that the Guard will likely incur and includes savings in operating costs that would have resulted regardless of the agreement; (4) it is too early to tell how the agreement will affect readiness for most units; (5) the Guard did not identify specific units that will assume the missions of 20 inactivating Reserve units; another 107 Reserve units are new and have 1 year to establish their readiness ratings; (6) GAO estimated the readiness impact for some units; 13 units will be replaced by units with lower readiness ratings, while 18 units will be replaced by units having the same or higher readiness ratings; (7) the Guard and Reserve have primarily left it up to the reserve component commands and individual units to help affected persons find new units; (8) in three areas already affected by the agreement (the 157th Separate Infantry Brigade, aviation units, and special operations units) some of the commands' and units' initiatives appear to be working well; (9) others, however, appear to discourage the transfer of personnel, even if a transfer would result in a more effective use of their skills; (10) senior and experienced officers and enlisted persons in inactivating units appear to have the most difficulty obtaining positions in other units in the Reserve and the Guard; (11) reserve helicopter pilots and technicians are also experiencing difficulties; and (12) GAO found no evidence indicating that the Special Operations Command will have problems exercising control over the training of Guard special operations forces.
Policymakers are increasingly viewing adaptation as a risk management strategy to protect vulnerable infrastructure that might be affected by changes in the climate. While adaptation measures—such as raising river or coastal dikes to protect infrastructure from sea level rise, building higher bridges, or increasing the capacity of stormwater systems—may be costly, there is a growing recognition that the cost of inaction could be greater. As stated in a 2010 NRC report, even though there are still uncertainties regarding the exact nature and magnitude of climate change impacts, mobilizing now to increase the nation's adaptive capacity can be viewed as an insurance policy against climate change risks. In this context, it is important to understand (1) federal infrastructure investment, (2) the condition of existing infrastructure, (3) climate change adaptation as a risk management tool, and (4) the limited federal role in planning infrastructure projects. In total, the United States has about 4 million miles of roads; 30,000 wastewater treatment and collection facilities; and over 800,000 federal facilities such as military installations that provide for the nation's defense and facilities where complex scientific and technological research is conducted. Collectively, this infrastructure connects communities, protects public health and the environment, and facilitates trade and economic growth, among other important functions. The nation's highway and wastewater infrastructure is primarily owned and operated by state and local governments and the private sector. For instance, state and local governments own about 98 percent of the nation's bridges. The federal government spends billions of dollars every year on transportation and wastewater infrastructure through a variety of funding mechanisms. According to a 2010 Congressional Budget Office report, total public spending on transportation and water infrastructure exceeds $300 billion annually, with roughly 25 percent of this amount coming from the federal government and the rest coming from state and local governments. From 1956 to 2007, the largest portion of annual public funding for transportation and water infrastructure was dedicated to highways. For the most part, the federal government supports these infrastructure investments through federal assistance to states and local communities. For example, EPA's Clean Water State Revolving Fund—a federal program that provides states and local communities with independent and sustainable sources of financial assistance, such as low- or no-interest loans to fund water quality projects—received an appropriation of just over $1.4 billion in fiscal year 2012. The federal government also owns and manages certain types of infrastructure. According to the General Services Administration, the federal government's portfolio of building assets totaled approximately 3.35 billion square feet of space and over 800,000 building and structural assets with a total operating cost of $30.8 billion in 2010. This total includes large federal complexes such as NASA centers where scientists and engineers conduct research, design new aerospace technologies, and operate the International Space Station, among other activities. (Congressional Budget Office, Public Spending on Transportation and Water Infrastructure, Pub. No. 4088 (Washington, D.C.: November 2010).)
NASA's real property holdings include more than 5,000 buildings and other structures such as wind tunnels, laboratories, launch pads, and test stands, according to a December 2011 NASA Inspector General report. In total, these NASA assets represent more than $32 billion in current replacement value. The infrastructure examined for this report—roads, bridges, wastewater management systems, and NASA centers—was designed to last for decades. More specifically, according to the American Society of Civil Engineers, the average bridge in the United States is designed to last 50 years, and EPA data indicate that wastewater treatment plants typically have an expected useful life of 20 to 50 years before they require expansion or rehabilitation. Over 80 percent of NASA's facilities are more than 40 years old and reaching the end of their designated life spans. Our past work and other studies have reported that much of the nation's physical infrastructure is in poor condition and in need of repair or replacement. For example, as we reported in June 2010, many wastewater management systems were constructed more than 50 years ago and are reaching the end of their useful lives. Many of these systems do not have the capacity to treat increasing volumes of wastewater, particularly during periods of wet weather, leading to the release of untreated wastewater into water bodies. Citing such concerns, the American Society of Civil Engineers 2009 Report Card for America's Infrastructure graded the condition of the nation's wastewater infrastructure as "D-," or between "poor" and "failing" on its rating scale. Roads and bridges fared similarly in the report card, also earning a "D-" and a "C" for "mediocre," respectively, due to identified structural deficiencies and other factors. Estimates to repair, replace, or upgrade aging roads, bridges, wastewater management systems, and federal facilities are in the hundreds of billions of dollars, not accounting for any additional costs to be incurred due to a changing climate, an important consideration for proposed adaptive measures that may require significant redesign, retrofitting, or replacement of planned or existing infrastructure. As we reported in May 2008, the current fiscal environment makes it even more important that federal, state, and local governments make prudent decisions on how to invest limited available resources as they address infrastructure needs. Yet, in many cases, we reported that federal infrastructure investment decisions are still based on conditions, priorities, and approaches that were established decades ago and are not well suited to addressing complex, crosscutting, and emerging challenges like climate change. Our May 2008 report identified principles that could guide a reexamination of federal infrastructure programs. These principles include creating well-defined goals based on identified areas of national interest, establishing and clearly defining the federal role in achieving each goal, incorporating performance and accountability into funding decisions, and employing the best tools and approaches to emphasize return on investment. (GAO, Physical Infrastructure: Challenges and Investment Options for the Nation's Infrastructure, GAO-08-763T (Washington, D.C.: May 8, 2008). See also GAO, Transportation: Key Issues and Management Challenges, GAO-12-581T (Washington, D.C.: Mar. 29, 2012).) Adaptation also builds resilience, that is, the capacity to prepare for, absorb, recover from, and more successfully adapt to adverse events.
Enhanced resilience results from better planning to reduce losses, rather than waiting for an event to occur and paying for recovery afterward. America's climate change adaptation choices involve deciding how to cope with climate changes that we cannot, or do not, avoid so that possible disruptions and damages to society, economies, and the environment are minimized and—where possible—so that impacts are converted into opportunities for the country and its citizens. In some cases, such as in Alaska, the need to adapt has already become a reality. In most cases, however, adapting today is about reducing vulnerabilities to emerging or future impacts that could become seriously disruptive if we do not begin to identify response options now; in other words, adaptation today is essentially a risk management strategy. Further, as we reported in 2009, given the complexity and potential magnitude of climate change and the lead time needed to adapt, preparing for these impacts now may reduce the need for far more costly steps in the decades to come. Of particular importance are planning decisions involving physical infrastructure, which require large capital investments and which, by virtue of their expected life span, will have to be resilient to changes in climate for many decades. Substitutes for infrastructure could also affect adaptation decisions; damages from disruptions due to climate change would be greater, all else equal, when fewer alternatives are available. The long lead time and long life of large infrastructure investments require planning decisions to be made well before further climate change effects are discernible. Risk management is not a new concept, and it is used extensively almost anywhere decision makers are faced with incomplete information or unpredictable outcomes that may have negative impacts. Broadly defined, risk management is a strategic process for helping policymakers make decisions about assessing risk, allocating finite resources, and taking actions under conditions of uncertainty. Leading risk management guidance recommends a sequence of activities similar to the one described in the International Organization for Standardization (ISO) 31000:2009 standards on risk management. Specifically, these standards recommend that organizations such as federal agencies develop, implement, and continuously improve a framework for integrating risk management into their overall planning, management, reporting processes, and policies. For risk management to be effective, these standards state that an organization should at all levels comply with the following principles:

Risk management is not a stand-alone activity that is separate from the main activities and processes of the organization. Risk management is part of the responsibilities of management and an integral part of all organizational processes, including strategic planning and all project and change management processes.

Risk management is part of decision making. Risk management helps decision makers make informed choices, prioritize actions, and distinguish among alternatives.

Risk management explicitly addresses uncertainty. Risk management explicitly takes account of uncertainty, the nature of that uncertainty, and how it can be addressed.

Risk management is based on the best available information. The inputs to the process of managing risk are based on information sources such as historical data, experience, stakeholder feedback, observation, forecasts, and expert judgment.
However, decision makers should inform themselves of, and should take into account, any limitations of the data or modeling used or the possibility of divergence among experts. Concerning risk management for federal facilities, OMB issues guidance for agencies on managing non-information technology capital assets that contains risk management criteria. Under OMB’s guidance, agencies are to complete a business case for physical infrastructure investment that includes sections on alternatives analysis and risk management. Risk must be actively managed throughout the life cycle of the investment, and a risk management plan must be available to OMB upon request. The federal government has an inherently limited role in the project-level planning processes central to adapting infrastructure to climate change because these are typically the responsibility of state and local governments—except when federal assets are involved. State and local authorities are primarily responsible for prioritizing and supervising the implementation of water and highway infrastructure projects; therefore, the federal role in these processes is limited. As specified by law, federal programs for funding roads, bridges, and wastewater infrastructure generally operate as formula grants or similar mechanisms with few explicit requirements to consider climate change in infrastructure projects. For example, federal funding for highways is provided to the states mostly through a series of formula grant programs collectively known as the federal-aid highway program. As we have reported, the Federal Highway Administration has faced challenges in ensuring that federal funds are efficiently and effectively used because the highway program is one in which there is limited federal control—it is a state-administered, federally assisted program. Funds are largely apportioned by formula, and the states enjoy broad flexibility in deciding which projects are supported. Furthermore, for nearly half of federal-aid highway funds, the Federal Highway Administration’s responsibility to oversee the design and construction of projects has been assumed by the states. Similarly, EPA officials told us that their ability to influence states to adapt to climate change through the Clean Water State Revolving Fund is limited and that each state is responsible for administering its own revolving funds. However, certain federal infrastructure programs may begin to consider adaptation in their project-level planning activities. For example, the Moving Ahead for Progress in the 21st Century Act—which was signed into law on July 6, 2012, and authorized over $105 billion in appropriations for surface transportation programs for fiscal years 2013 and 2014—authorizes federal funding to be used for bridge and tunnel projects that protect against extreme events. Another example is funding appropriated in the American Recovery and Reinvestment Act of 2009 (Recovery Act) for EPA’s Green Project Reserve under the Clean Water State Revolving Fund and EPA’s Drinking Water State Revolving Fund programs. The Recovery Act appropriated $4 billion for the Clean Water State Revolving Fund and required that not less than 20 percent of these funds—if there are sufficient eligible project applications—be used for projects to address green infrastructure or other environmentally innovative activities. 
According to EPA, this requirement, known as the Green Project Reserve, has funded projects that facilitate adaptation of clean water facilities to climate change, including green infrastructure and other climate-related and environmentally innovative activities, such as water and energy conservation. According to NRC and USGCRP assessments, changes in the climate have been observed in the United States and its coastal waters and are projected to grow in severity in the future, thereby increasing the vulnerability of infrastructure such as roads and bridges, wastewater management systems, and NASA centers. As shown in table 1, changes in the climate—including warmer temperatures, changes in precipitation patterns, more frequent and intense storms and extreme weather events, and sea level rise—affect roads and bridges, wastewater management systems, and NASA centers in a variety of ways, according to NRC and USGCRP. Infrastructure is typically designed to withstand and operate within historical climate patterns. However, according to NRC, as the climate changes and historical patterns—in particular, those related to extreme weather events—no longer provide reliable predictions of the future, infrastructure designs may underestimate the climate-related impacts to infrastructure over its design life, which can range as long as 50 to 100 years. These impacts can increase the operating and maintenance costs of infrastructure or decrease its life span, or both, leading to social, economic, and environmental impacts. The vulnerability of infrastructure to changes in the climate varies by category and location, as illustrated by our seven site visits, examples from additional interviews we conducted, and assessments we reviewed focused on three infrastructure categories—roads and bridges, wastewater infrastructure, and NASA centers. Climate change will have a significant impact on the nation's roads and bridges, according to assessments by NRC, USGCRP, and others. Transportation infrastructure is vulnerable to extremes in precipitation, temperature, and storm surges, which can damage roads, bridges, and roadway drainage systems. For example, if soil moisture levels become too high with increased precipitation, the structural integrity of already aging roads, bridges, and tunnels could be compromised. In addition, USGCRP's 2009 assessment notes that increased precipitation is likely to increase weather-related accidents, delays, and traffic disruptions in a transportation network already challenged by increasing congestion. Evacuation routes are likely to experience increased flooding, and more precipitation falling as rain rather than snow in winter and spring is likely to increase the risk of landslides, slope failures, and floods from the runoff, causing road closures, as well as the need for road and bridge repair and reconstruction. According to technical comments from EPA, increased precipitation could also overwhelm roadside stormwater systems, causing flooding of homes and businesses. Increases in temperature extremes are projected to generate more freeze-thaw conditions, creating potholes on road and bridge surfaces and resulting in load restrictions on certain roads to minimize damage, according to a 2008 NRC study. In addition, longer periods of extreme heat may compromise pavement integrity by softening asphalt and increasing rutting (i.e., sunken tracks or grooves made by the passage of vehicles).
Storm surge, combined with sea level rise, is projected to generate a wide range of negative impacts on roads and bridges. For example, according to the 2008 NRC study, storm surges are projected to increasingly inundate coastal roads, cause more frequent or severe flooding of low-lying infrastructure, erode road bases, and "scour" bridges by eroding riverbeds and exposing bridge foundations. From an operational perspective, increased storm surges are projected to cause more frequent travel interruptions, especially in low-lying and coastal areas, and necessitate more frequent evacuations, according to the study. The following are specific examples of the observed and projected effects of climate change on roads and bridges from the sites we visited. Washington state transportation officials told us that they expect that Washington State Route 522, about 35 miles northeast of Seattle, Washington, will be vulnerable to hydrologic changes resulting from changing temperatures and precipitation. More specifically, as the climate warms and glaciers melt, they expect increased sediment loads in nearby waterways. In addition, changes in rain and snow patterns are expected to alter river flows, which have already caused problems at Washington State Route 522's Snohomish River Bridge, located in a vulnerable position downstream of the convergence of the flash flood prone Skykomish River and the slower-moving Snoqualmie River. Due to flash flooding in this river system, state transportation officials said that they have had to repair scour damage at the Snohomish River Bridge. When designing the new project to replace the bridge and widen this 4-mile stretch of Washington State Route 522, transportation officials told us that they anticipated that hydrologic changes would continue to pose scour risks to the bridge. The two other road and bridge locations we visited highlight their vulnerability to storms and relative sea level rise—the combination of global sea level rise and changes in land surface elevation resulting from land loss through subsidence, or the sinking of land that can lead to submergence. Specifically, the Interstate 10 Twin Span Bridge, which crosses Lake Pontchartrain near New Orleans, and the southern portion of Louisiana State Highway 1 are both located in the low-lying central Gulf Coast region. This region is already prone to flooding during heavy rainfall events, hurricanes, and tropical storms, and USGCRP assessments expect that the region will become increasingly susceptible to inundation as barrier islands erode and subside into the Gulf of Mexico. In its 2008 study, USGCRP estimated that the region could experience as much as 6 to 7 feet of relative sea level rise in Louisiana and East Texas, an area home to a dense network of transportation assets. According to this study, the "middle range" of potential sea level rise (2 to 4 feet) indicates that a vast portion of the Gulf Coast from Houston to Mobile may be inundated over the next 50 to 100 years. The Twin Span Bridge has already been damaged by one extreme weather event—Hurricane Katrina. In 2005, Hurricane Katrina generated a large storm surge across Lake Pontchartrain, lifting many of Twin Span's 255-ton concrete bridge spans off of their piers, as shown in figure 1. Some of the spans toppled into the lake while others were seriously misaligned. Figure 2, an interactive graphic in the original report, illustrates how storm surge combined with wind-driven waves to knock the spans off their piers.
The sections of Louisiana State Highway 1 we visited are also in a particularly vulnerable location near the Gulf of Mexico, according to locally based federal and state officials. The highway provides the only road access to Port Fourchon, which services virtually all deep-sea oil operations in the Gulf of Mexico, and the Louisiana Offshore Oil Port, the nation's only deepwater oil port capable of unloading very large crude carriers. Collectively, Louisiana State Highway 1 currently supports 18 percent of the nation's oil supply. Flooding of this road effectively closes the port. According to NOAA officials, relative sea level rose an average of about 0.4 inches annually from 1947 to 2006 at a tidal gauge in nearby Grand Isle, LA. This is equivalent to a change of approximately 3 feet in 100 years, which a NOAA official described as one of the highest rates of relative sea level rise in the world. Currently, Louisiana State Highway 1 is closed an average of 3.5 days annually due to inundation. However, within 15 years, NOAA anticipates that the at-grade portions of Louisiana State Highway 1 will be inundated by tides an average of 30 times annually even in the absence of extreme weather. Because of Port Fourchon's significance to the national, state, and local oil industry, the U.S. Department of Homeland Security, in July 2011, estimated that a closure of 90 days could reduce national gross domestic product by $7.8 billion. In addition to these anticipated economic impacts, local officials also said that they are concerned about the safety of area residents and workers who rely on Louisiana State Highway 1 as their sole evacuation route during extreme weather events. Workers traveling between the port and their homes must navigate a low-lying segment of Louisiana State Highway 1, parts of which were built 4 feet above sea level in an area where current high tide levels are 2.5 feet above sea level. Figure 3 shows Louisiana State Highway 1 leading to Port Fourchon.
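As a quick check on the Grand Isle figures above, a constant 0.4 inches per year does extrapolate to roughly 3 feet over a century. The sketch below is a simple linear extrapolation and ignores any acceleration in the rate:

    # Linear extrapolation of relative sea level rise at Grand Isle, LA,
    # assuming the 1947-2006 average rate of 0.4 inches per year holds.
    rate_in_per_yr = 0.4
    rise_ft_per_century = rate_in_per_yr * 100 / 12
    print(f"About {rise_ft_per_century:.1f} feet per century")  # ~3.3 feet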
Some locations could experience other, less direct, climate change impacts from higher temperatures or drought conditions that alter the characteristics of the wastewater flowing into a treatment plant—for example, by concentrating pollutants or increasing water temperatures—thereby reducing the effectiveness of a system's treatment processes that were designed for different characteristics. In addition, treatment plants may need to adopt alternative strategies for managing discharge of treated or partially treated effluent if the condition of receiving water is altered by climate impacts, according to technical comments from EPA. For example, according to EPA's comments, the flow of the receiving water body may be too low to dilute discharge enough to meet water quality standards. Climate change impacts to wastewater management systems can increase treatment costs, increase maintenance and replacement costs, and compromise biological treatment systems, resulting in impaired water quality. In the worst cases, according to EPA officials, climate change impacts could cause a system to fail, creating risks to public health. Potential climate change impacts on wastewater management systems are not limited to coastal areas, since changes in precipitation and extreme events could affect wastewater management systems across the country. According to USGCRP's 2009 National Climate Assessment, the amount of rain falling in the heaviest downpours has increased approximately 20 percent on average in the past century, and this trend is very likely to continue, with the largest increases in the wettest places. During the past 50 years, the greatest increases in heavy precipitation occurred in the Northeast and the Midwest. Besides flooding and related storm damage at treatment plants, increased precipitation creates problems for combined and separate sewer systems that collect and carry sewage to treatment facilities. Specifically, these precipitation changes can increase the amount of runoff, which by design combines with sewage in a combined sewer system, and can lead to increased infiltration and inflow into aging separated systems. These increases can overwhelm the capacity of sewer systems, causing overflows that bypass treatment and result in the discharge of untreated wastewater into receiving water bodies. Wastewater management systems are typically designed to provide a specific level of service based on a number of design factors that include a particular storm frequency, duration, and intensity. For example, according to one set of commonly used design standards, treatment plant components are typically designed for 25-, 50-, or 100-year storms. Changes in characteristics of strong storms—for instance, a storm that historically occurred once every 100 years may occur every 50 years in the future—could cause wastewater management systems to be overwhelmed more frequently. Climate change impacts have added to existing stresses—including aging infrastructure and urbanization—that already tax the capacities of many of the country's wastewater management systems and challenge communities' ability to pay for them. Specific impacts that have been observed in the two locations we visited are discussed in the following sections. As EPA states on its combined sewer overflow web page, combined sewer systems are sewers that are designed to collect rainwater runoff, domestic sewage, and industrial wastewater in the same pipe. Most of the time, combined sewer systems transport all of their wastewater to a sewage treatment plant, where it is treated and then discharged to a water body.
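The design-storm discussion above can be restated with a standard hydrology formula, offered here as our illustration rather than a method from the report: assuming independent years, the chance that a T-year storm strikes at least once during an n-year design life is 1 - (1 - 1/T)^n. The sketch below shows how a shift from a 100-year to a 50-year recurrence raises the odds of overwhelming a plant designed for a 50-year life:

    # Chance that a T-year storm occurs at least once in n years,
    # assuming each year is independent: 1 - (1 - 1/T)**n.
    def exceedance(T, n):
        return 1 - (1 - 1 / T) ** n

    for T in (100, 50, 25):
        print(f"{T}-year storm, 50-year design life: {exceedance(T, 50):.0%}")
    # 100-year: ~39%; 50-year: ~64%; 25-year: ~87%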
As EPA states on its combined sewer overflow web page, combined sewer systems are sewers that are designed to collect rainwater runoff, domestic sewage, and industrial wastewater in the same pipe. Most of the time, combined sewer systems transport all of their wastewater to a sewage treatment plant, where it is treated and then discharged to a water body. In a 2008 vulnerability study, King County combined sea level rise projections with extreme storm events to develop different scenarios of future tide heights. These tide height scenarios were combined with the elevations of King County's system facilities to identify those at risk of onsite flooding. As shown in figure 4, King County has many facilities—including treatment plants, regulator stations, pump stations, and other components—in tidally influenced areas. The lowest of these facilities—Barton Pump Station, 8th Avenue Regulator Station, Brightwater Flow Meter Vault and Sampling Facility, and Elliott West Combined Sewer Overflow Treatment Plant—lie less than 15 feet above sea level. The 2008 vulnerability study concluded that more than 30 major facilities in King County are at varying levels of risk from sea level rise and storm surge, depending on the rate at which the rise occurs and the probability of an extreme storm event. For example, according to the study, the Barton Pump Station, 8th Avenue Regulator Station, and Brightwater Flow Meter Vault and Sampling Facility—all of which have an elevation of 13 feet—are projected to be flooded every 2 years by 2050 under a high sea level rise scenario (approximately 1.8 feet). Due to past problems with sewer overflows, the Milwaukee Metropolitan Sewerage District in Wisconsin significantly increased the capacity of its sewer system. As shown in figure 5, the district completed a $3 billion project in 1993 that included construction of a "deep tunnel" to add additional wastewater storage capacity to its combined and separated sewer systems and decrease the likelihood of combined sewer overflows. In the past, according to Milwaukee Metropolitan Sewerage District officials, this project and the district's other sewer system design decisions were based on a 64-year historical rainfall record from 1940 to 2004. These officials stated that the Milwaukee region's robust sewer infrastructure helps make its system less vulnerable to changes in precipitation that may result from climate change. However, during our site visit, Milwaukee Metropolitan Sewerage District officials stated that even this more robust infrastructure is vulnerable to projected changes in the climate. In recent years, the Milwaukee region has experienced several extreme precipitation events and, in 2011, scientists at the University of Wisconsin projected that these types of storms will become more common in the future. Specifically, the scientists projected that storm frequency and intensity will increase in early spring, a time during which the sewers are more vulnerable to overflows due to frozen ground conditions that limit infiltration and cause more runoff. Increases in spring precipitation associated with climate change could exceed the capacity of the system and increase the volume and frequency of sewer overflows in the Milwaukee region by mid-century, according to the scientists.
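The capacity concern the Wisconsin scientists describe can be framed as a simple storage balance: does the runoff a storm drives into the combined system exceed what the deep tunnel can hold? The sketch below illustrates only that framing; every number in it is a hypothetical placeholder, not a district figure.

```python
# Hypothetical storage-balance screen for a combined sewer system with
# deep-tunnel storage. All values are illustrative placeholders, not
# Milwaukee Metropolitan Sewerage District figures.

TUNNEL_CAPACITY_MG = 500.0   # assumed storage, million gallons
SERVICE_AREA_ACRES = 25_000  # assumed combined-sewer service area
RUNOFF_COEFF = 0.6           # assumed fraction of rainfall becoming runoff
                             # (higher in early spring when ground is frozen)
GAL_PER_ACRE_INCH = 27_154   # conversion: gallons per acre-inch of water

def runoff_mg(rain_in: float, coeff: float = RUNOFF_COEFF) -> float:
    """Runoff volume, in million gallons, for a storm depth in inches."""
    return SERVICE_AREA_ACRES * rain_in * coeff * GAL_PER_ACRE_INCH / 1e6

for rain in (1.0, 2.0, 3.0):
    vol = runoff_mg(rain)
    status = "overflow likely" if vol > TUNNEL_CAPACITY_MG else "within storage"
    print(f"{rain:.1f}-inch storm -> {vol:.0f} MG ({status})")
```

Under these assumptions, a shift toward more frequent 2- and 3-inch early-spring storms moves events from "within storage" to "overflow likely," which is the mechanism behind the scientists' mid-century projection.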
As presented in table 2, NASA centers are vulnerable to climate change in several respects, but potential impacts vary depending upon geographic location. NASA's centers and associated sites each have different missions and geographic characteristics that affect their vulnerability to climate change. As shown in figure 6, many of NASA's field centers and component sites are near an ocean shoreline. In fact, over two-thirds of NASA's constructed real property value (about $20 billion) is within 16 feet of sea level, according to a 2012 NASA climate change presentation. NASA is developing the institutional capacity to identify the risks posed to its centers by climate change through a series of multiday climate risk workshops, including two we attended in September 2011 and March 2012. The workshops are intended to, among other functions, share climate information specific to each center with agency officials—including headquarters officials, center leadership, and center managers responsible for overarching "systems" that support mission and operations, such as the center's electrical distribution network—and community stakeholders such as local planning officials. Through these workshops, NASA climate scientists and center personnel have assembled site-specific observed and projected changes in the climate for selected centers and have begun grappling with potential climate impacts on these facilities. We describe two centers we visited—Johnson Space Center and Langley Research Center—in more detail in the following sections, as well as selected emerging efforts within DOD, which has several facilities in close proximity to Langley Research Center. According to NASA documents obtained at the March 2012 workshop, Johnson Space Center leads NASA's flight-related scientific and medical research efforts, and its professionals direct the development, testing, production, and delivery of U.S. human spacecraft and human spacecraft-related functions, including training space explorers from the United States and International Space Station partner nations. As shown in figure 7, the center is located on nearly 1,700 acres in Houston, Texas, near Galveston Bay and the Gulf of Mexico. Ellington Field, part of Johnson Space Center, lies northwest of the center. Johnson Space Center's facilities are conservatively valued at $2.3 billion and include 163 inhabited structures; 4 million square feet of office space; 3 miles of underground tunnels; 8.3 miles of roadways; 142 labs and simulators; and 2 national historic landmarks, including the Apollo Mission Control Center. Among these facilities, its mission control center is often referred to as the nerve center for America's human space program. A specialized pool at the Sonny Carter Training Center near Ellington Field simulates the zero gravity or weightless conditions experienced by spacecraft and crew during space flight. In addition, more than $4.0 billion of federal aerospace contracts are now managed out of Johnson Space Center, providing a local payroll of more than $1.9 billion annually. More than 15,000 people work at the center, including about 3,300 civil servants. Climate data collected over the past 100 years in the Houston-Galveston area show a long-term pattern of relative sea level and temperature rise, according to NASA climate scientists who presented information at the March 2012 workshop. Climate models project continued relative sea level rise and warmer temperatures in the region, according to these scientists. Because of its location on the Gulf Coast, storm surge and sea level rise may be the biggest climate threats to Johnson Space Center, according to documents prepared by NASA climate scientists. Land subsidence also worsens the impacts of rising seas and storm surge.
NASA climate scientists stated that, while little change is expected in average annual precipitation, precipitation could come at different intervals, and individual precipitation events may become stronger, leading to increased risks of flash flooding. In addition, according to NASA data, the number of days per year exceeding 90 degrees Fahrenheit is projected to rise dramatically in the coming century. The projected changes in the frequency of some extreme events like hot and cold days shown in table 3 would likely affect energy use and the number of hours staff can work outside. According to NASA documents obtained at the September 2011 workshop, Langley Research Center was founded in 1917 as the first civil aeronautical research lab, and its unique research and testing facilities make critical contributions to the development of NASA's next generation of heavy-lift rockets and capsules for future space exploration. As shown in figure 8, Langley Research Center occupies nearly 800 acres in Hampton, Virginia, near the mouth of the Chesapeake Bay. The Port of Hampton Roads is the nation's third largest seaport, and the surrounding area has a strong federal presence in addition to the center, including Army, Navy, Air Force, Marines, and Coast Guard facilities. As shown in figure 8, Langley Research Center borders the Northwest Branch and Southwest Branch of the Back River, which flows east to the Chesapeake Bay. Most of its acreage is located to the west of Langley Air Force Base, with several small parcels to the east within the base. Decision makers have not systematically incorporated potential climate change impacts in infrastructure planning for roads, bridges, and wastewater management systems, according to representatives we spoke with from professional associations and officials from agencies that represent or work with these decision makers. Instead, efforts to incorporate climate change impacts into planning for infrastructure projects have occurred primarily on a limited, ad hoc basis. The association representatives and agency officials told us, and NRC has reported, that decision makers in the infrastructure categories we examined have generally not included adaptive measures in their planning because (1) they typically focus their attention and resources on competing, shorter-term priorities; (2) they face challenges identifying and obtaining available climate change information best suited for their projects; (3) they often do not know how to access local assistance; or (4) available climate change information does not fit neatly into their infrastructure planning processes. Representatives from professional associations we spoke with said that nearer-term competing priorities make it difficult for decision makers to address the impacts of climate change, since many state and local governments responsible for the infrastructure face immediate funding and staffing challenges.
In many cases, according to these representatives and reports from the Transportation Research Board (TRB) of the NRC and the National Drinking Water Advisory Council, adaptation is a relatively low priority compared with more traditional and immediate concerns such as managing aging infrastructure systems, sustaining current levels of service, protecting public health, safety, and the environment, and maintaining service affordability. In the case of wastewater infrastructure, for example, available funding is often inadequate to implement climate adaptation actions on top of more pressing needs such as meeting permit requirements, upgrading wastewater treatment plants, and preparing to implement proposed stormwater rules, according to officials from the National Association of Clean Water Agencies and a December 2010 report from the National Drinking Water Advisory Council. Due to the immediacy of many competing priorities and current funding constraints, decision makers tend to delay addressing climate change adaptation—the benefits of which may not be realized for several decades into the future. Moreover, infrastructure planning processes and their associated funding cycles occur on time horizons poorly matched to the longer view sometimes required to discern the effects of climate change and identify the benefits of adaptation. For example, as noted in the TRB report, the longest-term planning horizons for many transportation planners rarely exceed 30 years—20 to 25 years is the norm. Yet, according to this report, the inherent variability of the climate makes it difficult to discern climate change trends over periods less than approximately 25 years. Consequently, many transportation planners perceive that the impacts of climate change will be experienced well beyond the time frame of their longest-term plans, not realizing that climate changes could already be occurring and that investment decisions made today will affect how well the infrastructure accommodates these and future changes over its design life. According to technical comments from CEQ, OSTP, and USGCRP, several of the examples listed in this section (meeting permit requirements, upgrading wastewater treatment plants, and preparing to implement proposed stormwater rules) are driven by federal agencies and federal actions, suggesting a way federal agencies could incentivize and encourage more of a focus on this issue. Decision makers often face challenges obtaining the best available climate-related information relevant to their decision-making process. According to NRC studies and decision makers and other infrastructure stakeholders we interviewed, decision makers are unsure about where to go for information and what information they should use because (1) vast amounts of information come from multiple, uncoordinated sources and (2) the quality of the information varies. Decision makers often struggle to identify which information among the vast number of climate change studies available is relevant, according to NRC studies and our interviews with federal agencies and other stakeholders. NRC researchers, federal officials, and other stakeholders reported that a vast amount of climate change information—including climate modeling results and observational datasets—is available from the independent efforts of federal and state agencies, universities, professional associations, and others.
However, this information is typically made available to decision makers through what NRC described in 2012 as a "loading dock" model, which assumes that simply producing more scientific findings will improve the quality of decisions. According to the NRC study, this information is reported in studies made available through peer-reviewed publications and placed on the public "loading dock," where decision makers are expected to retrieve and interpret the studies for their purposes. NRC, Committee on a National Strategy for Advancing Climate Modeling, Board on Atmospheric Studies and Climate, Division on Earth and Life Sciences, A National Strategy for Advancing Climate Modeling (Washington, D.C.: 2012). NRC, America's Climate Choices: Panel on Informing Effective Decisions and Actions Related to Climate Change, Informing an Effective Response to Climate Change (Washington, D.C.: 2010). As a result, decision makers can be overwhelmed by the climate change literature, or can spend a great deal of time trying to find useful information. For example, one decision maker we interviewed noted that identifying the relevant aspects of the constant stream of scientific papers he receives is akin to "picking needles out of the hay." According to the 2010 NRC report, the end result of this information not being easily accessible is that people may make decisions—or choose not to act—without it. Given the large volume of climate-related information, decision makers also struggle to identify which information is of the best quality. In many instances, according to the 2012 NRC report on climate models, decision makers often do not have sufficient information to appreciate the strengths and weaknesses of different information because differences and uncertainties among datasets and their usefulness for different purposes may not be documented. As a result, decision makers must assess the quality of information themselves and figure out how to appropriately and reliably use the results. According to one representative from the Georgetown Climate Center, decision makers at the local level may be interested in incorporating climate change into their planning and design decisions but are hesitant to do so because they do not know how to assess the quality of the information. Decision makers face difficulty accessing local assistance as they consider adaptation options. According to a 2010 NRC study, no one-size-fits-all adaptation option exists for a particular climate impact because climate change vulnerabilities can vary significantly by infrastructure category, region, community, or institution. In other words, all adaptation is local. Decision makers—who, in this case, specialize in infrastructure planning, not climate science—need assistance from experts who can help them translate available climate change information into something that is locally relevant. However, decision makers face difficulty accessing such local assistance because (1) individuals qualified to translate science to decision makers are in short supply and (2) when qualified translators do exist, decision makers do not know how to find them. Climate information translators are in short supply. As NRC reported in 2010, a limited number of people are qualified to communicate science in ways that are useful to decision makers who are considering options for climate change adaptation.
Decision makers need to work with an individual who has knowledge of the present state of climate science and the ability to access climate data, interpret them in a local context, and help them understand the implications of those data and attendant uncertainties, according to a 2012 NRC study on climate models. As more and more communities become aware of the potential need for adaptation, intermediaries who can help bridge the gap between decision makers who want to use climate change information and the scientists who produce it are increasingly in demand. However, according to a 2011 NOAA report, meeting this increased demand presents challenges because academic institutions do not typically recognize "use-inspired" knowledge developed in collaboration with practitioners and decision makers as activities meeting academic standards for tenure, which may discourage researchers from developing such expertise. In addition, some of the stakeholders we interviewed noted that, while they saw a local demand for outreach efforts to bridge the communication gap between decision makers and climate scientists, few federal programs are designed to support such activities. Decision makers do not know where to find climate information translators. Decision makers face a challenge finding experts who can help them understand and use available climate change information. Several stakeholders we interviewed told us that federal science agencies are not in tune with the information needs of different sectors, and the disparate sources of expertise leave users confused about where to turn for help. As stated by a May 2012 NOAA-sponsored study, for most decision makers "it is not obvious who to contact for what they need, be it data, information, models or technical assistance." Even where good scientific information is available, it may not be in the actionable, practical form needed for decision makers to use in planning and designing infrastructure. Such decision makers work with traditional engineering processes, which often require very specific and discrete information, but scientists commonly produce climate-related information without these explicit needs in mind. Consequently, according to professional association representatives, decision makers often do not have "actionable science" of the type and scale they need to make infrastructure decisions. Specifically, (1) infrastructure decision makers need climate information at a regional or local geographic scale, but climate information has generally been produced at a global or continental scale; (2) infrastructure design decisions are made using data on the frequency and severity of extreme events, but climate information is typically presented as changes in average conditions; and (3) traditional engineering practices rely on using backward-looking historical data, whereas climate change projections are inherently forward-looking and uncertain. Information mismatch in geographic scale. As reported by NRC in 2009, the geographic scale at which climate change information is typically available can present serious challenges for its usefulness to decision makers. In general, climate change projections have focused on the global or continental scale, but the vast majority of infrastructure decision makers require information at the regional or local scale. For example, a bridge designer may require information about how climate change will impact the flow of a specific river that a bridge crosses.
To generate such information at the required scale, various "downscaling" methods exist. However, these methods introduce an additional level of uncertainty, and "downscaled" information is not available for all locations because of modeling resource constraints. Climate averages versus extremes. Climate change projections tend to focus on average changes in climate variables, such as temperature and precipitation, and are not sophisticated enough to adequately characterize extreme events, which drive the design criteria for infrastructure, according to studies we reviewed and stakeholders we interviewed. Representatives of the American Society of Civil Engineers told us that climate and weather modeling indicates that extremes may become more frequent or severe, but that such modeling does not make this information sufficiently quantitative to serve as the basis for design, operation, and maintenance decisions. According to these engineers, information on future extreme events expected to occur during the service life of infrastructure is a critical component in designing more resilient infrastructure. However, according to technical comments from CEQ, the Office of Science and Technology Policy (OSTP), and USGCRP, although knowing the magnitude of future extremes would be useful, it is not necessary, for example, to know exactly how extreme precipitation will be in the future to know that larger culverts need to be used than were used in past road design. Forward- versus backward-looking. Climate change projections are inherently forward-looking and uncertain, but traditional engineering processes rely on historical information. In addition, as reported by NRC in 2012, such climate change projections commonly provide a range of possible future outcomes. For example, available information may indicate that, in a particular area, intense downpours will become more frequent over the coming decades and provide a range of possibilities for the timing and magnitude of the increase. However, as stated by representatives of the American Society of Civil Engineers that we interviewed, existing infrastructure planning processes, and the design standards they rely on, require climate data with known and static probability distributions, such as the magnitude of a 100-year storm as determined by a historical record of precipitation. In fact, engineers use statistical tables of historical precipitation intensity, duration, and frequency developed by NOAA that, in some cases, have not been updated since the 1960s. In light of these issues, according to the American Society of Civil Engineers representatives, climate change projections are a long way from being translatable into engineering standards of practice. As a result, NRC, in 2010, reported that adapting the nation's infrastructure to climate change will require new approaches to engineering analysis, such as using risk management to take uncertainties into account. In technical comments, CEQ, OSTP, and USGCRP noted that this may overstate the issue because even historical data contain uncertainty in the timing and intensity of events, and engineering processes already account for other factors that are projected with uncertainty, such as changing development patterns and population growth.
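To make the backward-looking practice concrete, the sketch below fits a Gumbel (extreme value) distribution to a record of annual maximum daily rainfall and reads off 25-, 50-, and 100-year return levels—the kind of static statistic the design standards described above expect. The data are synthetic and the fit is purely illustrative; actual practice relies on NOAA's published intensity-duration-frequency statistics.

```python
# Illustrative version of the traditional, backward-looking approach:
# fit a Gumbel distribution to historical annual-maximum daily rainfall
# and read off return levels. Synthetic data; not NOAA's methodology.

from scipy.stats import gumbel_r

# Synthetic stand-in for ~60 years of annual max daily rainfall (inches)
annual_max_in = gumbel_r.rvs(loc=3.0, scale=0.8, size=60, random_state=0)

loc, scale = gumbel_r.fit(annual_max_in)
for T in (25, 50, 100):
    # Return level: the depth exceeded with annual probability 1/T
    level = gumbel_r.ppf(1 - 1 / T, loc=loc, scale=scale)
    print(f"{T:>3}-year storm estimate: {level:.1f} inches/day")

# The limitation the engineers describe: if the climate is nonstationary,
# return levels estimated this way from the historical record will
# understate future extremes.
```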
Notwithstanding the challenges that have deterred most decision makers from integrating climate change considerations into infrastructure planning processes, we identified and visited several locations where some decision makers overcame these challenges. Key factors enabled these decision makers to successfully integrate climate change into their infrastructure project planning. Decision makers at the seven locations we visited were able to integrate climate-related information into infrastructure project planning to varying degrees. These locations exhibited considerable diversity in the types of infrastructure at issue, geographic settings, and other circumstances. The adaptive measures themselves did not involve major overhauls of project plans or infrastructure systems but instead provide examples of practical responses to observed or projected climate-related impacts. Decisions to adapt infrastructure to climate change may depend on its remaining useful life, among other factors, because adaptation can be relatively more expensive when undertaken as a retrofit than at the design phase of a project. It is important to note that climate change was not always the primary reason for changing the infrastructure projects in these examples. Rather, the examples illustrate a shift in thinking where climate change is considered one of many hazards accounted for in planning and implementation.

Interstate-10 Twin Span Bridge (Louisiana)

As discussed above, the Interstate-10 Twin Span Bridge, which crosses Lake Pontchartrain outside New Orleans, Louisiana, is vulnerable to storm surge caused by hurricanes. Following the failure of the old bridge during Hurricane Katrina, Louisiana state transportation officials decided to raise and strengthen the new Twin Span Bridge to protect against future storms—specifically, to protect the structure against storm surges of similar strength to Hurricane Katrina, the largest storm surge on record for Lake Pontchartrain. When deciding how to manage risk over the bridge's intended 100-year life span, the Twin Span's design team considered many factors, such as durability, cost, and long-term maintenance. The design team ultimately decided to make a larger initial investment and build a stronger bridge to minimize future maintenance problems and expenses. The new bridge cost more than $700 million and was fully funded by federal emergency relief funds. Decision makers integrated several adaptive measures into the new bridge's design. As shown in figure 9, these measures included the following:

Opening railings to reduce wave forces on the bridge's deck.

Raising piers above historic peak wave heights, which involved raising the new bridge 23 feet above the old bridge elevation.

Lengthening piles, long columns driven deep into the soil to support the bridge, to accommodate larger anticipated wave loads.

Introducing rigid connections made of formed concrete to prevent the deck from floating off bridge piers, which occurred during Hurricane Katrina.

Strengthening bridge-supporting girders with higher density high-performance concrete, which is expected to increase the bridge's resilience to saltwater in Lake Pontchartrain, according to Louisiana state transportation officials.

According to officials from the Louisiana Department of Transportation and Development, these adaptive measures performed well during Hurricane Isaac in 2012, the first major storm to hit the new bridge since it opened to the public. When we visited after Hurricane Isaac, there were few visible impacts on the bridge structure.
Although the storm surge from Isaac submerged the approaches to the bridge (i.e., the part of the bridge that carries traffic from land to the main parts of the bridge) and eroded adjacent land, the storm's impact on the bridge itself was limited to damaged signage and electrical components. Louisiana transportation officials noted that the new Twin Span's resilience during Isaac highlights the importance of designing resilient long-lived infrastructure.

Louisiana State Highway 1 (Louisiana)

Louisiana State Highway 1 is vulnerable to storm surge given sea level rise, land subsidence, and its close proximity to open water and the Gulf of Mexico, as previously explained. A coalition of state and local officials worked together to obtain funding to raise an 11-mile segment of the highway by 22.5 feet to protect the road from 100-year flood events. To further protect the road from storm surge, bridge designers used restraining devices and anchor bolts to prevent the road deck from dislodging from the rest of the structure in the event of a large storm surge. Figure 10 presents a rendering of the new, raised road that was opened to traffic in 2009 (on the left) in relation to the old, unraised road (on the right). The raised segment of Louisiana State Highway 1 was largely unaffected by Hurricane Isaac—the first major hurricane to hit since the raised segment was opened to the public. Some signs were damaged, but the raised section's superstructure, which includes the girders, was unaffected despite the approximately 6.5-foot storm surge measured at Port Fourchon, according to local transportation and port officials we spoke to during an on-site follow-up visit. In contrast, the unraised sections of the highway both north and south of the raised road were damaged. Figure 11 documents Hurricane Isaac-related flooding on the unraised section of Louisiana State Highway 1 north of the raised road.

Washington State Route 522 (Washington)

Washington State Route 522 and its Snohomish River Bridge are vulnerable to projected increases in precipitation and flash flooding, which may lead to increased bridge scour and roadbed damage. In 2008, the Washington State Department of Transportation completed environmental reviews for a major construction project along Route 522 to improve safety and reduce congestion. During the design, state officials integrated several measures into the project that both reduced the project's impact on the environment and increased its resilience to projected climate change impacts. Figure 12 illustrates some of the measures integrated into the project design. Specifically, at the Snohomish River Bridge site, engineers deepened bridge footings—the enlarged portions of bridge foundations that rest directly on soil, bedrock, or piles—to protect against the effects of changes in the flow of the river. Engineers also placed bridge piers at least 10 feet above documented peak flows and aligned the bridge at the least vulnerable location along the river. Furthermore, state transportation officials built five stormwater treatment areas and eight water retention ponds that serve the dual purposes of controlling and treating stormwater flows, and they plan to increase the size of two drainage culverts, to (1) mitigate the project's impact on the surrounding environment by allowing wildlife to cross between habitat areas and improving fish access; (2) protect the roadbed by allowing greater amounts of water to flow more freely, preventing damaging roadbed saturation; and (3) increase the connectivity of waterways, which preserves natural drainage.
Also shown in figure 12 is the Skykomish Basin wetland mitigation bank located upstream of the Snohomish River Bridge. For this project, purchasing credits from the mitigation bank serves the dual purposes of (1) offsetting the loss of 15.6 acres of wetland and wetland buffer areas damaged during construction with compensatory flood storage and (2) reducing the erosive capacity of water on the bridge by slowing the flow of the Skykomish River.

King County Wastewater Treatment Division (Washington)

Facilities managed by the King County Wastewater Treatment Division are vulnerable to sea level rise, which may increase flooding of infrastructure and combined sewer overflows. To address this concern, the Wastewater Treatment Division made minor modifications to new construction and rehabilitation projects and plans to more formally incorporate climate change information into its asset management program. Based on a climate change vulnerability assessment of its system, engineers adjusted the design of two vulnerable facilities. First, engineers determined that raising the new Brightwater Flow Meter Vault and Sampling Facility's equipment by 5 feet would address these assets' vulnerability to projected sea level rise. Accordingly, these facilities were designed and built 5 feet higher. Second, at the Barton Pump Station, which was scheduled for rehabilitation, engineers raised an overflow weir and installed a flap gate, pictured in figure 13, to prevent saltwater intrusion. According to King County Wastewater Treatment Division officials, these adaptive actions were "low-risk, high-reward" measures, illustrating "no regrets" solutions that provide benefits regardless of future climate conditions. For example, the modifications made to the Barton Pump Station will help protect against current saltwater intrusion problems such as the event that tripped a combined sewer overflow alarm in January 2010 during a particularly high tide.

Milwaukee Metropolitan Sewerage District (Wisconsin)

Milwaukee Metropolitan Sewerage District facilities are vulnerable to projected increases in the frequency and intensity of extreme rainfall events due to climate change, potentially resulting in more frequent and larger combined sewer overflows. As part of broader efforts to meet growing demand for sewer capacity, Milwaukee Metropolitan Sewerage District officials employed what they called "green infrastructure" programs to make the district's sewer system more resilient to climate change by capturing and holding or slowing the flow of stormwater, and officials plan to incorporate climate change adaptation into infrastructure planning and design where it makes sense as their facilities age and are replaced over time. Three of these programs, shown in figure 14, include (1) bio-swales, which are depressed catchment areas planted with vegetation to capture and infiltrate stormwater runoff; (2) green roofs either partially or completely planted with vegetation to hold rainwater; and (3) the purchase of undeveloped property to preserve targeted land areas to store and drain stormwater runoff into the ground naturally. Milwaukee Metropolitan Sewerage District officials emphasized the co-benefits of green infrastructure programs, including flood management, improved air and water quality, increased property values, reduction of the urban heat island effect, and additional recreational amenities.
NASA Johnson Space Center (Houston, TX) and NASA Langley Research Center (Hampton, VA)

Storm surge and relative sea level rise pose significant climate threats to Johnson Space Center and Langley Research Center. As previously discussed, these centers hosted adaptation workshops to identify risks to assets and capabilities from current and future changes in the climate. We attended these workshops and observed that they involved a broad range of stakeholders—including NASA climate scientists, headquarters officials, and center staff; local government and industry officials; and experts from local academic institutions—in a comprehensive evaluation of center vulnerability. The workshops are organized to help each center (1) obtain information on historic, current, and projected climate hazards specific to the region; (2) characterize the risk of current and future climate on center systems, assets, and capabilities; (3) start to build capacity to execute a continuous adaptation process; and (4) begin to plan for the future and integrate climate considerations into existing management plans and processes. These workshops were held in late 2011 and 2012, so it is too early to fully evaluate the progress of NASA centers in incorporating climate change into their planning processes. NASA officials have begun to conduct follow-up activities and analyze lessons learned from the workshops. An important outcome of the workshops has been increased NASA collaboration and partnership with surrounding communities, federal neighbors, and academia, according to NASA officials. Additionally, some centers are supporting local tidal marsh restoration projects or implementing their own protective measures for vulnerable mission-critical areas. Low-impact development has been implemented as one way of slowing water runoff and allowing more infiltration. For example, the Langley Research Center has identified high-priority areas for wetland development to act as buffer zones for future storm surge events, and it plans to harden or elevate vulnerable infrastructure elements (heating, ventilation, and air-conditioning equipment, as well as electrical transformers) as it rehabilitates, repairs, and maintains its assets over time. The adaptive measures described above did not necessarily require decision makers to undertake major changes to project plans or infrastructure systems but often did involve a commitment of financial resources and, importantly, a change in mind-set toward addressing longer-term and uncertain risks that many decision makers are not yet in a position to consider. Key factors that enabled these decision makers to undertake such measures and overcome the challenges that have deterred others from integrating climate change into infrastructure planning were that (1) their local circumstances were conducive to addressing climate-related risks, (2) they learned to use available climate information, (3) they had access to local assistance, or (4) they considered climate impacts within existing planning processes in the same context as other potential risks.
At the sites we visited, local circumstances were conducive to addressing climate-related risks because these sites (1) were in regions that recently experienced a natural disaster or that had discernible climate-related impacts, providing a stronger basis for engaging in adaptation efforts; (2) had strong community leadership to help spur action; and (3) had executive orders or other formal policy documents to help justify and encourage taking adaptive actions. Recent extreme weather events triggered a response. In some cases, decision makers were compelled to account for future climate conditions by a triggering event that demanded a response or created a policy window for action. For example, Hurricane Katrina exacted a heavy toll on the old Twin Span Bridge, necessitating a rebuild and prioritizing the construction of a new, more resilient bridge. As noted in the 2009 NRC report on climate-related decision support, recent firsthand experience with a natural disaster, such as a heat wave, drought, storm, or flood, can dramatically increase decision makers' desire for, and openness to, new information and action. For example, according to stakeholders from the American Association of State Highway and Transportation Officials (AASHTO), the sense of urgency of climate change adaptation is generally higher in coastal states and in areas that have experienced recent events affecting their transportation infrastructure. Similarly, EPA officials told us that the likelihood that a wastewater utility would consider climate change in infrastructure planning depends largely on, among other things, where it was located geographically and, in some cases, whether it had already experienced a weather event that might increase with a changing climate. This point was evident during our visit to Milwaukee, where extreme rainfall events in 2008, 2009, and 2010 each exceeded the magnitude of a 100-year storm, making the public aware of the need to prepare for the impacts of climate change. Also, according to NASA officials, the impact of extreme events on the two NASA centers we visited helped drive the creation of the adaptation workshops. NRC, America's Climate Choices: Panel on Adapting to the Impacts of Climate Change, Adapting to the Impacts of Climate Change (Washington, D.C.: 2010). Policy documents helped justify action. As shown by our site visits, executive orders or other formal policy documents can help justify and encourage adaptive efforts at the state and federal levels. For example, Washington State Executive Order 07-02, issued in 2007, directed the development of a climate change initiative to determine the specific steps that should be taken to prepare for the impact of global warming on infrastructure, among other things. Since then, state transportation officials considered climate change adaptation during the environmental review of Washington State Route 522, and the Washington State Department of Transportation has directed all project teams to consider climate change in their national and state environmental review documents. Similarly, King County's 2007 Climate Action Plan provided the impetus to move forward on adaptation activities, according to Wastewater Treatment Division officials.
At the federal level, the October 5, 2009, Executive Order 13514 on Federal Leadership in Environmental, Energy, and Economic Performance directs federal agencies to evaluate their climate change risks and vulnerabilities and manage the effects of climate change on the agency's operations and mission in both the short and long term. NASA officials at the Johnson Space Center and Langley Research Center workshops cited the executive order as a reason to take the workshops seriously. The examples from our site visits show that it is possible to use many types of climate-related data to make more informed decisions about climate change in project-level infrastructure planning. Importantly, the decision makers at the sites we visited did not wait for perfect information to take action, and they learned to manage the uncertainty associated with climate-related data. As stated to us by an official from Seattle Public Utilities, "uncertainty should not be an excuse for inaction on climate change adaptation. Decision makers have to get smarter and find ways to incorporate whatever climate information they have." Despite the challenges that decision makers reported in identifying and applying available information about climate change, decision makers at the sites we visited learned to use a range of information sources, including (1) site-specific projections of future climate conditions, (2) qualitative information based on state or regional scale climate projections, and (3) observed climate data. Site-specific projections of future climate conditions. In some cases, decision makers learned to use site-specific projections of future climate conditions when determining how to take adaptive measures. For example, NASA climate scientists prepared downscaled climate variable projections for the Johnson Space Center and Langley Research Center workshops. Table 5 shows projected quantitative climate changes for Johnson Space Center. Furthermore, Milwaukee Metropolitan Sewerage District officials used site-specific climate change projections prepared by the Wisconsin Initiative on Climate Change Impacts as a foundation for planning green infrastructure components that may have a beneficial impact on their system. More specifically, the Milwaukee Metropolitan Sewerage District contracted with researchers at a local academic institution to use these projections to provide an analysis of how climate change could impact the sewer system and cause sewer overflows. The King County Wastewater Treatment Division similarly used sea level rise projections from the University of Washington's Climate Impacts Group in its facilities vulnerability study. Qualitative information. Not all decision makers have access to quantified site-specific projections of future climate changes. In the absence of such projections, some infrastructure decision makers from our site visits used qualitative evaluations of state or regional scale climate projections to help make more informed decisions. For example, site-specific climate projection data were not available when Washington State Department of Transportation officials evaluated adaptation measures for Washington State Route 522. For this reason, the project team conducted a qualitative evaluation of climate variability based on available information, such as information from the region's transportation planning organization and studies reflecting how climate change impacts may manifest themselves within the region. Similarly, when Seattle Public Utilities officials assessed their adaptation options, site-specific climate change projection data were not adequate to be useful for planning purposes. As a result, according to a 2011 EPA report, utility officials used their general understanding of climate trends to apply a safety factor to new infrastructure construction so that new investments would more likely perform their intended function over their useful lives. This is a practical approach that can be generalized to a wide range of adaptation situations, according to technical comments from CEQ, OSTP, and USGCRP.
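A minimal sketch of how such a judgment-based safety factor might enter a routine sizing calculation follows. The 1.2 factor, flow values, and culvert-sizing step are all hypothetical illustrations, not figures from Seattle Public Utilities or the EPA report.

```python
# Hypothetical illustration of the "safety factor" approach: scale up the
# historical design value so a new asset retains headroom if conditions
# worsen. All numbers are assumptions for illustration.

import math

historical_peak_flow_cfs = 150.0  # design flow from the historical record
climate_safety_factor = 1.2       # assumed allowance for wetter extremes

design_flow_cfs = historical_peak_flow_cfs * climate_safety_factor

# Simple culvert sizing at a fixed allowable velocity: required flow area
# scales linearly with design flow, so diameter grows with sqrt(flow).
allowable_velocity_fps = 8.0
required_area_sqft = design_flow_cfs / allowable_velocity_fps
diameter_ft = math.sqrt(4 * required_area_sqft / math.pi)
print(f"Design flow: {design_flow_cfs:.0f} cfs -> "
      f"culvert diameter ~{diameter_ft:.1f} ft")
```

The same pattern—a historical design value times a headroom factor—applies whether the quantity being sized is pipe capacity, pump rating, or freeboard on a structure.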
Observed historical climate data. According to a NOAA workshop report on climate adaptation, observed climate records help to overcome barriers that may be associated with discussions of climate change. Milwaukee Metropolitan Sewerage District officials told us they emphasize data on observed changes when the public inquires about the district's climate change adaptation actions. Similarly, officials from the Wisconsin Initiative on Climate Change Impacts stated that while it is difficult to ask a planning board for money to make design changes based on uncertain projections, observations can show that the climate is changing, and stakeholders are often more compelled by historical data than by model projections. As we observed, NASA kicked off each workshop by presenting observed climate data for the local area and discussing participants' personal experiences with weather events to make the potentially abstract notion of climate vulnerability "real." Figure 15 shows the observed historical sea level and temperature data that NASA used in its Langley Research Center workshop. Some decision makers at the sites we visited said that they used historic climate data to inform engineering decisions. For example, when designing the new Twin Span Bridge, Louisiana Department of Transportation and Development engineers wanted to design the bridge to resist storm surge and wave action from the worst-case storm scenario. However, they had no detailed information about Lake Pontchartrain's wave characteristics or guidance from AASHTO on how to design a bridge to withstand extreme weather events in coastal areas. To obtain this information, these officials hired experts in the area of wave mechanics to conduct a storm analysis. The experts used historic storm surge data to develop hypothetical scenarios regarding wave crest elevations and hurricane tracks. While reviewing historic data, the experts discovered that Lake Pontchartrain is very susceptible to storm surge. To determine the worst-case scenario for the Twin Span Bridge, they modeled a storm with properties similar to Hurricane Katrina along different storm tracks. The storm surge and waves created by a Katrina-like hurricane located west of the bridge became the basis of their design. Access to local assistance was instrumental to decision makers' ability to undertake climate adaptation efforts at the sites we visited. Decision makers used this assistance to (1) translate available climate information into a meaningful and usable form and (2) help communicate to the local community the risks associated with climate change and the importance of taking action. Translating available information. At most of our site visits, local experts helped decision makers bridge the gap between the information they needed and the science that was available.
Decision makers at the sites we visited told us that local experts were instrumental because they understood the local context. In one example, the Milwaukee Metropolitan Sewerage District sought the expertise of local scientists and planners who were familiar with its sewer system and local considerations because available climate change information could not be used "off the shelf" for wastewater planning. These experts translated region-specific climate model data into a form that could be plugged into existing sewer system models used by the Milwaukee Metropolitan Sewerage District for system planning and evaluation. This enabled the district's decision makers to understand the projected impacts of climate change on its sewer system and appropriately tailor their adaptation efforts. In another example, NASA developed a Climate Adaptation Science Investigator working group with members at each of its centers to partner NASA climate scientists with local infrastructure managers, thereby developing local expertise that decision makers could use to tailor center-specific adaptation solutions. Communicating to the public. In addition to helping translate climate change information, decision makers at our site visits noted the importance of having local experts to help communicate local climate change information to the public and help the community understand the need for adaptation. For example, several decision makers in King County said that experts from the Climate Impacts Group at the University of Washington, through outreach programs, were effective in focusing the community's attention on climate change issues and the importance of investing in climate preparedness. According to one of the decision makers, when King County officials are "able to stand shoulder-to-shoulder" with local scientists known in the community, they do not have to defend the underlying climate science to customers who could potentially face increased rates. Similarly, several decision makers in Milwaukee noted that having local experts helps the agency more effectively convey to the community the need for and importance of climate preparation. They noted, "the response you get from people when talking about climate change often depends on who is delivering the message." Some decision makers stated during our site visits that a key factor in their success was an ability to consider potential climate change impacts within their existing infrastructure planning processes so that they were viewed in the same context with other potential risks. As NRC reported in 2010, incorporating adaptation considerations into existing processes—a concept known as "mainstreaming"—can reduce costs and provide incentives to adapt. The value of mainstreaming adaptation into normal planning processes was illustrated by several of our site visits. In Milwaukee, for example, sewerage district officials noted that efforts to consider climate change in sewer infrastructure planning were successful because climate change information could be integrated into existing planning processes and analyses. In one such effort, an engineer at the Milwaukee Metropolitan Sewerage District told us that the agency builds new water conveyance structures taller because it "makes sense" given the known vulnerabilities to increased flooding in the region.
Additionally, in the Washington State Route 522 example, project planners incorporated climate change considerations during the project's environmental review process, which provided the opportunity to explain how the elements of the project helped to improve climate resiliency and reduce the potential for damage from extreme storm events. According to Washington State Department of Transportation officials, climate adaptation measures were integrated with decisions about how to minimize environmental effects and comply with regulations, permits, and approvals. Some of the decision makers from our site visits envision more formally integrating potential climate change impacts into planning processes. For example, wastewater officials from King County said that they will likely include climate change risk in a field of the county's asset management database that is maintained to track the status and condition of infrastructure components. Therefore, when a particular component is due for rehabilitation or replacement, information will be readily available for planners and designers to make the component more resilient to climate change as it is being modified anyway. Similarly, NASA's Climate Change Adaptation Policy Statement notes that the agency plans to start building the capacity to execute a continuous adaptation process and will require that climate considerations be incorporated into existing management plans and processes. According to NASA officials, such plans and processes include master planning efforts, construction of facilities projects, environmental management systems, and permitting. Emerging federal efforts are under way to facilitate and enable more informed decisions about adaptation, including raising public awareness, but these efforts could better meet the needs of local decision makers, according to studies, decision makers from our site visit locations, and other stakeholders. In some cases, these sources identified opportunities to better meet the needs of local infrastructure decision makers in the future by (1) improving infrastructure decision makers' access to and use of available climate-related information, (2) providing increased access to local assistance, and (3) considering climate change in existing planning processes. Emerging federal efforts to raise public awareness of climate change adaptation include (1) the Interagency Climate Change Adaptation Task Force, (2) the National Climate Assessment status report on climate change science and impacts, and (3) vulnerability assessments for specific infrastructure categories. Executive Order 13514 on Federal Leadership in Environmental, Energy, and Economic Performance called for federal agencies to participate actively in the already existing Interagency Climate Change Adaptation Task Force. The task force, which began meeting in spring 2009, is cochaired by CEQ, NOAA, and OSTP and includes representatives from more than 20 federal agencies and executive branch offices. The task force was formed to develop federal recommendations for adapting to climate change impacts both domestically and internationally and to recommend key components to include in a national strategy. On October 14, 2010, the task force released its interagency report outlining recommendations to the President for how federal policies and programs can better prepare the United States to respond to the impacts of climate change.
The report recommended that the federal government implement actions to expand and strengthen the nation's capacity to better understand, prepare for, and respond to climate change. The 2010 report laid out guiding principles for adaptation for federal agencies (principles that others should consider as well), along with policy goals and recommended actions for the federal government. These recommended actions include making adaptation a standard part of agency planning to ensure that resources are invested wisely and services and operations remain effective in a changing climate. On October 28, 2011, the task force released Federal Actions for a Climate Resilient Nation: Progress Report of the Interagency Climate Change Adaptation Task Force, which outlined federal progress in expanding and strengthening the nation's capacity to better understand, prepare for, and respond to extreme events and other climate change impacts. The report provides an update on actions in key areas of federal adaptation, including building resilience in local communities and providing accessible climate information and tools to help decision makers manage climate risks. According to the task force, its work has increased awareness of climate change across the federal government and generated adaptive actions. In technical comments, CEQ, OSTP, and USGCRP noted that the task force recommended that each agency "mainstream" adaptation planning into its missions, operations, and facilities so as to ensure that climate change impacts are taken into consideration in long-term planning and in the reform of building standards. The task force also stated that, as the federal government further integrates adaptation into its operations, policies, and programs, it will catalyze additional adaptation planning across the nation. However, the 2012 NRC report on climate models describes the task force as having largely been confined to convening representatives of relevant agencies and programs for dialogue, without mechanisms for making or enforcing important decisions and priorities. In technical comments, CEQ, OSTP, and USGCRP took issue with NRC's description of task force activities, citing the release of agency adaptation plans (discussed further below) and a variety of other strategic planning efforts, including the National Fish, Wildlife and Plants Climate Adaptation Strategy. The National Climate Assessment, required not less frequently than every 4 years by the Global Change Research Act of 1990 and conducted under the USGCRP, analyzes the effects of global change on the natural environment, agriculture, energy production and use, land and water resources, transportation, human health and welfare, human social systems, and biological diversity; it also analyzes current trends in global change, both human-induced and natural, and projects major trends for the subsequent 25 to 100 years. USGCRP intends that this assessment be used by U.S. citizens, communities, and businesses as they create plans for the nation's future. According to USGCRP documents, these assessments serve an important function in providing the scientific underpinnings of informed policy and act as status reports about climate change science and impacts. They can identify advances in the underlying science, provide critical analysis of issues, and highlight key findings and key unknowns that can guide decision making. Assessments attempt to identify climate impacts at the regional level to raise awareness and spur more informed decision making.
There have been two assessments in the past 20 years, and a draft of a third assessment report was released for public review on January 11, 2013. The first, in 2000, included a large stakeholder engagement process, and the second, in 2009, was more focused on specific climate science topics. The third assessment, expected to be finalized in March 2014, differs in multiple ways from previous efforts, according to USGCRP's strategic plan. Building on the recommendations of the NRC, it will both implement a long-term, consistent, and ongoing process for evaluation of climate risks and opportunities and inform decision making processes within regions and sectors. An essential component of this ongoing process is to establish a sustained assessment activity both inside and outside of the federal government that draws upon the work of stakeholders and scientists across the country. The third National Climate Assessment report will also have significant components related to transportation and water infrastructure, among other sectors, according to USGCRP.

Some federal agencies are also conducting vulnerability assessments for specific infrastructure categories. For example, the Federal Highway Administration is developing a vulnerability and risk assessment model for transportation infrastructure. To test this effort, the Federal Highway Administration funded pilot studies in Washington State; the San Francisco Bay Area; Oahu, Hawaii; Hampton Roads, Virginia; and New Jersey. For these pilots, the Federal Highway Administration developed a risk assessment model to aid state departments of transportation and metropolitan planning organizations in inventorying assets, gathering climate information, and assessing the risk to their assets and the transportation system from climate change. The pilots started at the end of 2010, and participating agencies completed their project reports in late 2011. The Federal Highway Administration has reviewed these reports and used the feedback from the pilot agencies to refine the vulnerability and risk assessment framework, according to agency officials. Specifically, the Federal Highway Administration's December 2012 Climate Change and Extreme Weather Vulnerability Assessment Framework draws from the experiences of these pilot projects to develop a guide for transportation agencies interested in assessing their vulnerability to climate change and extreme weather events. The framework provides an overview of key steps in conducting vulnerability assessments and uses examples to demonstrate a variety of ways to gather and process climate-related information. According to agency officials, the Federal Highway Administration is initiating a second round of pilots, to be launched in early 2013, with an expanded focus on extreme weather events and adaptation options, and the agency is currently soliciting proposals for additional pilot agencies to further evaluate the framework.

According to relevant studies, local decision makers from our site visits, and other stakeholders, future federal efforts to improve access to and use of available climate-related information could better focus on the needs of local decision makers.
These sources identified opportunities for these efforts to better meet the needs of local infrastructure decision makers in the future by (1) better coordinating and improving access to the best available climate-related data and (2) providing technical assistance to help local decision makers translate available climate-related data into information useful for decision making.

Emerging federal efforts to coordinate and improve access to the best available climate-related data for decision making are much needed, according to studies, local decision makers from our site visits, and other stakeholders. According to a 2010 NRC study, the federal government has a critically important role in coordinating available climate-related data because it provides and supports large infrastructure for data collection and analysis (e.g., satellites, climate models, and monitoring systems) and can set standards for information quality. However, as noted by USGCRP in its April 2012 strategic plan, federal agencies generally have pursued a distributed data strategy over the last decade, in which individual agencies have established archives for collecting and storing data. This means that decisions and actions related to climate change are currently being informed by a loose confederation of networks and other institutions, according to the 2010 NRC study.

A range of stakeholders cited the need to improve the coordination of agency climate data collection and consolidation efforts. For example, Milwaukee Metropolitan Sewerage District officials told us they believe the federal government could better focus its initiatives by integrating climate-related information programs under one umbrella. Echoing this sentiment, officials from the Wisconsin Initiative on Climate Change Impacts stated that the "federal agencies that provide climate change information need to find a way to coordinate their efforts. Currently, there is no coherence among such agencies." In addition, in its December 2010 report, the EPA National Drinking Water Advisory Council noted that there is a pressing need for a coordinated, collaborative information strategy that is supported by the key agencies and organizations and that helps make the most effective use of the limited financial and technical resources available to address climate change challenges.

USGCRP agencies have been providing global change information that is essential to many aspects of policy, planning, and decision making. The growing demands for information by decision makers, however, are highlighting the need for improved accessibility to more comprehensive, consolidated, and user-relevant global change-related data and information. Global change observations, monitoring, modeling, predictions, and projections—underpinned by the best available natural and social science—can provide the framework of global change information, but no single agency can provide the breadth of information needed. This provides a unique opportunity for current and potential USGCRP partners, including the private sector, academia, and other federal agencies, to improve the effectiveness of USGCRP's global change information in ways that better address the growing public demand for science that can inform decision making without prescribing outcomes.
USGCRP has established an adaptation science workgroup focused on coordinating interdisciplinary science in support of national and regional adaptation decisions, among other activities, and is working with CEQ, OSTP, and other agencies to improve coordination of the development and delivery of climate science and services to local decision makers, according to USGCRP officials. In our 2011 report on climate change funding, OSTP stated that, while significant progress is being made in linking climate science-related efforts, individual agencies still want to advance initiatives that promote or serve their agency missions. This, according to OSTP, yields a broader challenge of tying climate-related efforts together into a coherent governmentwide strategy, since interagency coordinating programs like USGCRP generally do not have direct control over agency budgets. According to a 2009 NRC report, the absence of centralized budget authority limits the ability of the USGCRP to influence the priorities of participating agencies or implement new research directions that fall outside or across agency missions. In technical comments, CEQ, OSTP, and USGCRP noted that the absence of centralized budget authority remains the most important impediment to USGCRP's ability to meet its mandate to provide the information needed to support adaptation planning and implementation. However, according to the technical comments, agencies' enabling legislation and subsequent reauthorizations generally require that they advance initiatives that promote or serve their agency missions, and the appropriations process supports and reinforces separate budget authorities, particularly where agencies are covered by different congressional committees. The technical comments also noted that the difficulty of finding mechanisms to facilitate joint federal funding of projects makes collaboration and implementation of joint priorities more challenging.

While coordinating available climate-related data is a first step in making more informed adaptation decisions, another key step is to ensure decision makers have access to the best available data. According to a 2010 NRC study, an informed and effective national response to climate change requires that the widest possible range of decision makers—public and private, national and local—have access to up-to-date and reliable information about current and future climate change, the impacts of such changes, the vulnerability to these changes, and the response strategies for reducing emissions and implementing adaptation. As stated by AASHTO officials, the most important role that the federal government could play in the transportation sector with respect to adaptation would be to provide a central repository for state transportation officials to go to for data. Similarly, stakeholders at a recent NOAA-sponsored workshop on transportation infrastructure adaptation highlighted the importance of clear guidance on where to look for information, including the need for a central clearinghouse for climate and weather information relevant to transportation officials. Efforts to provide infrastructure decision makers with access to climate-related information are an emerging priority across the federal government.
For example, on June 6, 2012, both the Acting Director of OMB and the Director of OSTP signed the Science and Technology Priorities for the Fiscal Year 2014 Budget memorandum, which states that agencies should give priority to research and development that strengthens the scientific basis for decision making. Such research and development is to include efforts to enhance the accessibility and usefulness of data and tools for decision support, specifically efforts that advance the implementation of federal adaptation initiatives. USGCRP's April 2012 strategic plan recognizes this high-level priority by identifying enhanced information management and sharing as a key objective. In this regard, USGCRP is pursuing the development of a Global Change Information System to support coordinated use and application of federal climate science. USGCRP plans to leverage existing tools, services, and portals from the USGCRP agencies to develop a "one-stop shop" for accessing global change data and information, according to the strategic plan. These efforts, if fully implemented, appear likely to improve access to the broad range of available climate-related information. However, it remains unclear how federal efforts will address the challenge of clearly identifying the best available information to use in local infrastructure planning so that decision makers who may not be familiar with climate science are not left to sort it out themselves.

Several site visit decision makers, infrastructure stakeholders, and available studies noted additional infrastructure adaptation information needs that could be met through future federal research. Better organized and accessible climate data may meet some of these needs, but a "one-stop shop" may also highlight gaps in existing data. In other words, access to existing information may not be enough to meet all the perceived needs of infrastructure decision makers because some types of desired information do not yet exist. According to OMB's and OSTP's fiscal year 2014 science and technology priorities memo, specific areas where progress is needed include observations to detect trends in weather extremes; integration of observations into models; simulation and prediction at spatial and temporal scales conducive to decision making; and adaptation responses to the changing frequency and intensity of extreme weather events. Regardless, improved coordination and consolidation of federal climate data will assist in the prioritization of future federal adaptation science activities and help local and federal officials distinguish true "needs" from "wants."

Even with coordinated and accessible climate data, local decision makers will need technical assistance and tools to interpret what the data mean for infrastructure planning, according to our 2009 report on climate change adaptation (GAO-10-113). For example, for its climate change adaptation workshops at Johnson Space Center and Langley Research Center, NASA developed handouts that present facility-relevant climate change information in a user-friendly format to help decision makers at NASA centers understand what to expect in the future, so they can plan accordingly. To help nonscientists use the handouts, they explain how to interpret local climate projections, identify specific potential impacts from climate change, and lay out the key adaptation considerations for local decision makers. Other agencies provide similar assistance; for example, EPA's Climate Ready Water Utilities initiative supports decision makers responsible for drinking water, wastewater, and stormwater infrastructure.
The resources and tools developed under the Climate Ready Water Utilities initiative are designed for decision makers with different levels of adaptation experience, according to EPA officials. Decision makers with little experience can learn about adaptation options using EPA's Adaptation Strategies Guide for Water Utilities, while more advanced decision makers can use EPA planning tools to conduct a workshop or use EPA's Climate Resilience Evaluation and Awareness Tool, a risk assessment software tool that uses climate information from USGCRP's 2009 National Climate Assessment to enable utilities to evaluate a range of climate change scenarios from 2010 through 2090. This tool allows decision makers to analyze how various adaptation strategies may help reduce climate risks, enabling them to prioritize the implementation of adaptation measures. In the future, according to EPA officials, the Climate Ready Water Utilities initiative will focus on developing tools for smaller utilities that have limited resources to engage technical experts for assistance. According to EPA officials, the agency has other projects under way focused on providing additional information and alternative approaches for communities. These projects include work on a decision-making framework to help decision makers select among different adaptation approaches, development of case studies to promote peer-to-peer learning on preparing for impacts, and development of a tool for users to evaluate options in a range of potential future water quality scenarios.

The Department of Transportation supports a range of technical assistance efforts focused on helping road and bridge infrastructure decision makers incorporate climate change information into planning processes. First, the department maintains the Transportation and Climate Change Clearinghouse, which provides access to existing literature on climate change adaptation and transportation issues, but less in the way of the detailed site-specific information that decision makers need for infrastructure planning. Second, the department, through its Federal Highway Administration, completed Phase 1 of the Gulf Coast Study in March 2008, which analyzed how changes in climate could affect transportation systems in the Gulf Coast region over the next 50 to 100 years. A second phase of the Gulf Coast Study, scheduled to be completed in 2013 according to the Federal Highway Administration, is focusing on the Mobile, Alabama, region and will build on the information developed in Phase 1. The Phase 2 study inventoried critical infrastructure, assembled climate data and projections for the region, and will assess the vulnerability of the critical infrastructure across modes. The study will also develop transferable tools and approaches that decision makers can use to determine which transportation systems most need to be protected and to identify and choose suitable adaptation options.

The technical assistance and tools provided by EPA and the Department of Transportation hold promise as ways to help decision makers obtain the best available climate-related information for infrastructure planning. However, officials from EPA and the Department of Transportation said that they do not know the extent to which decision makers are using the tools they developed.
EPA officials told us they were not sure about the extent to which utilities have used the agency's Climate Resilience Evaluation and Awareness Tool and can estimate the number of users only from the number of times it has been downloaded and the number of participants in pilot programs and educational webinars. EPA officials told us that the agency plans to conduct additional outreach to decision makers. Likewise, according to Federal Highway Administration officials, the extent to which states and metropolitan planning organizations have used some of the agency's climate adaptation resources remains unclear. The officials said that the states and metropolitan planning organizations participating in pilot programs have used the agency's draft adaptation framework. In addition, federal officials track and collect feedback from the state and local agencies that have participated in the workshops and peer exchanges that the Federal Highway Administration has sponsored, according to agency officials. Importantly, a 2009 NRC report on informing decisions in a changing climate found it difficult to identify good reviews and clear, unbiased discussions of the full range of decision support tools and their appropriate uses and limitations, and it concluded that there could be a stronger role for the federal government to provide guidance on tools to support climate decisions, perhaps through a climate tools database, network, and best practice examples (NRC, Panel on Strategies and Methods for Climate-Related Decision Support, Committee on the Human Dimensions of Global Change, Informing Decisions in a Changing Climate (Washington, D.C.: 2009)).

At the locations we visited, having access to local assistance was a key variable that enabled decision makers to incorporate climate change into project-level infrastructure planning. The entities coordinating federal adaptation efforts are beginning to reflect in strategic planning the need to develop and provide access to local expertise capable of bridging the gap between decision makers and scientists. For example, USGCRP's April 2012 strategic plan recognizes the need to improve the federal government's ability to translate climate information into what is needed by decision makers, and adaptation task force reports state that the federal government should enhance its capacity to translate information between scientists and decision makers. The National Climate Assessment also provides an opportunity to engage with stakeholders and partners and is being structured to provide a continuing mechanism for engaging communities and networks of stakeholders at the local, state, tribal, and regional levels.

NOAA's Regional Integrated Sciences and Assessments (RISA) program supports research teams that work at the intersection of climate science and public policy. RISA teams help build the nation's capacity to adapt to climate variability and change by providing information to local decision makers. For example, Seattle Public Utilities and King County recognized NOAA's local RISA program—the University of Washington Climate Impacts Group—as instrumental in helping to elevate the issue of climate change in the central Puget Sound region and Washington State. As noted by CEQ, there are other examples of science-to-user continuums from which to learn, including U.S. Department of Agriculture Cooperative Extension and NOAA Sea Grant Extension, which provide extension agents of all specializations with training in understanding and communicating climate change information to support adaptation.

According to the 2012 NRC report on climate models, translating climate model output into information that end users can apply needs to be done by qualified people to ensure that users receive the most accurate and appropriate information.
The people currently doing this work come from a diversity of backgrounds, such as weather modeling, engineering, statistics, and environmental science. Currently, no standards exist for helping potential employers assess whether such people have the necessary skills in the appropriate use of climate model information to ensure that they can provide the most accurate and appropriate information to end users. This suggests an unmet need for training and accreditation programs in this area. Accordingly, in the report, NRC recommended the development of a national education and accreditation program for "climate model interpreters" who can take technical findings and output from climate models, including quantified uncertainties, and use them in a diverse range of private- and public-sector applications. It is not clear what role the federal government could or should play in the development of such a program.

Whatever the federal role in the future of climate data translation, research and experience show that such activities are more effective when well-established organizations build trust among information users over time, and that, in many instances, formal institutionalization will be critical to longevity, recognition, and success, according to NRC. The Interagency Climate Change Adaptation Task Force recognizes this need and stated, in its 2010 progress report, that to effectively integrate and implement adaptation responses, the federal government should recruit, develop, and retain technically capable staff who have the proper expertise to understand decision maker needs and to communicate effectively the range of possible climate change impacts. USGCRP is also aware of this issue, noting in its April 2012 strategic plan that USGCRP agencies will use their relationships with academia to promote the interdisciplinary education at undergraduate and graduate levels needed for a professional and technical workforce in areas related to climate change. These federal goals were developed too recently to evaluate, but it is unclear how developing a highly qualified workforce of climate interpreters without a corresponding institutional home would help infrastructure decision makers understand whom they can contact for assistance.

Notwithstanding the limited federal role in planning for transportation and wastewater infrastructure, several emerging federal adaptation efforts could help local infrastructure decision makers consider climate change in existing processes, according to studies, local site visit decision makers, and other stakeholders. These efforts relate to (1) design standards specifying how to consider climate change in infrastructure projects; (2) guidance specifying how certain types of federal infrastructure investments should account for climate change when meeting the requirements of the National Environmental Policy Act of 1969 (NEPA); and (3) agency adaptation plans describing, among other things, how climate change will be considered in federal planning processes that influence local actions.

Professional associations like AASHTO—not federal agencies—generally develop the design standards that specify how weather and climate-related data are to be considered in project-level design and planning processes for roads and bridges, wastewater management systems, and NASA centers. OMB Circular A-119 directs agencies to use these voluntary consensus standards in lieu of government-unique standards except where inconsistent with law or otherwise impractical.
According to Federal Highway Administration officials, for highway infrastructure these design standards are in turn modified and adopted by state governments and then approved by the relevant federal agency, in this case the Federal Highway Administration, before they can be applied to federally funded projects. Thus, federal agencies rely on professional associations to provide initial input to determine how and when climate-related data are included within design standards that specify how infrastructure is to be built.

Decision makers from the sites we visited, other infrastructure stakeholders, and relevant studies emphasized the importance of better employing design standards as a tool for incorporating climate change in infrastructure planning. For example, experts from the University of Washington who work with the King County Wastewater Treatment Division stated that it would be helpful to have (1) protocols for developing and maintaining design standards that incorporate climate change projections and (2) established methods for using this information in actual design processes via well-documented case studies, because, according to these experts, not having a formal process for incorporating climate change information in design standards effectively ensures that most of the design community cannot act without unacceptable professional risks. Similarly, officials from the American Society of Civil Engineers with whom we spoke acknowledged that incorporating climate science in design standards is critical for translating adaptation into engineering practice. Building on this point, a recent report on adaptation policy noted that updating design standards can also spur innovation in materials science, engineering, and construction.

Professional associations are beginning to take interest in climate change adaptation. For example, AASHTO maintains a web-based Transportation and Climate Change Resource Center with a climate adaptation page and a list of educational webinars on topics such as adapting infrastructure to extreme events. (For more information about AASHTO's Transportation and Climate Change Resource Center, see http://climatechange.transportation.org/.) Also, in 2011, the American Society of Civil Engineers developed a Committee on Adaptation to Climate Change to, in part, translate climate science into engineering practice. In addition, some private infrastructure development and construction companies are beginning to develop methods to compare the costs and benefits of engineering alternatives considering different climate futures. These efforts are just under way, with as yet undetermined outcomes, but, according to a TRB-commissioned study, updating standards is a long process, involving many governmental and nongovernmental standard-setting organizations.

As a result, there have been calls for a more active federal role in encouraging professional associations to consider climate change in design standards. In 2010, NRC identified as a national priority the revision of engineering standards to reflect current and anticipated future climate changes, and it recommended that their use be required as a condition for federal investments in infrastructure. While not going as far as the NRC recommendation, recent transportation legislation recognized the significance of design standards.
Section 33009 in the Senate version of the Moving Ahead for Progress in the 21st Century Act would have required the Secretary of Transportation, in consultation with others, to issue guidance and establish design standards for transportation infrastructure. These standards were intended to help states and other entities plan for natural disasters and a greater frequency of extreme weather events in the process of planning, siting, designing, and developing transportation infrastructure by assessing vulnerabilities to a changing climate and the costs and benefits of adaptation measures. Section 33009 was not, however, in the version of the bill the conference committee agreed to, which ultimately passed both Houses of Congress and was signed into law on July 6, 2012.

Certain types of federal infrastructure investments need to meet the requirements of NEPA, which requires federal agencies to evaluate the environmental impacts of their proposed actions and reasonable alternatives to those actions. Usually, federal agencies evaluate the likely environmental effects of major federal actions using an environmental assessment or, if the action likely would significantly affect the environment, a more detailed environmental impact statement. On February 18, 2010, CEQ—the entity within the Executive Office of the President that oversees implementation of NEPA—issued draft guidance on how federal agencies can consider the effects of climate change in the NEPA process. As CEQ noted in this guidance, the environmental analysis and documents produced in the NEPA process could consider the relationship of climate change effects to a proposed action, such as an infrastructure project that was a major federal action, or to alternatives, including proposal design and adaptation measures.

CEQ's draft NEPA guidance states that climate change effects should be considered in the analysis of projects that are designed for long-term utility and located in areas that are considered vulnerable to specific effects of climate change (e.g., increasing sea level or ecological change) within the project's time frame. For example, a proposal for long-term development of transportation infrastructure on a coastal barrier island will likely need to consider whether environmental effects or design parameters may be changed by the projected increase in the rate of sea level rise. Given the length of time involved in present sea level projections, such considerations typically would not be relevant to an action with only a short-term time frame. The guidance further states that this is not intended as a new component of NEPA analysis but rather as a potentially important factor to be considered within the existing NEPA framework. The draft guidance also noted that, after consideration of public comment, CEQ intended to expeditiously issue the guidance in final form.

CEQ received public comments on the draft guidance following its release on February 18, 2010. CEQ has not finalized the guidance or issued regulations addressing how, if at all, federal agencies are to consider the effects of climate change in the NEPA process. When asked for an estimate of when the final guidance would be available, CEQ, in December 2012, stated that "we are continuing to assess the best approach moving forward as we work on developing the guidance," but did not indicate when the guidance would be finalized.
Without finalized guidance from CEQ, it is unclear how, if at all, agencies are to consider climate change in the NEPA process, creating the potential for inconsistent consideration of the effects of climate change across the federal government.

As directed by CEQ instructions and guidance implementing Executive Order 13514, agency adaptation plans for fiscal year 2013 were submitted to CEQ in June 2012 as part of executive branch agencies' annual Strategic Sustainability Performance Plans. According to CEQ, the adaptation plans are to outline each agency's policy framework, analysis of climate change risks and opportunities, process for agency adaptation planning and evaluation, programmatic activities, and actions taken to better understand and address the vulnerabilities posed by a changing climate. Agencies are to consider how they will include climate change within their existing programs and planning processes, some of which can influence state and local actions on infrastructure investment. For example, on September 24, 2012, the Federal Highway Administration's Associate Administrators for Infrastructure; Planning, Environment, and Realty; and Federal Lands Highway issued a memorandum to Federal Highway Administration staff clarifying the eligibility of adaptation activities for federal highway funding. The memo notes that Federal Highway Administration offices may allow state and local agencies to use highway funds to consider the potential impacts of climate change and extreme weather events and to apply adaptation strategies, at both the project and systems levels. The extent to which agency adaptation plans will address policy specifics such as the Federal Highway Administration guidance is unclear because draft plans were released on February 7, 2013, and are undergoing public review and comment.

Physical infrastructure such as roads, bridges, wastewater management systems, and NASA centers are typically expensive, long-term, federally funded investments. Many are projected to be affected by changes in the climate that, according to the best available science, are inevitable in coming decades. As the nation makes these investments, it faces the choice of paying more now to account for the risk of climate change or potentially paying a much larger premium later to repair, modify, or replace infrastructure ill-suited for future conditions. The choice raises a basic risk management question that an increasing number of state and local decision makers are beginning to address, particularly in the aftermath of Superstorm Sandy. Planning for transportation and wastewater infrastructure in this country remains largely within the domain of state and local governments, but emerging federal efforts are under way to facilitate and enable more informed decisions about adaptation. Moreover, entities coordinating federal adaptation efforts are beginning to reflect in strategic planning the need to develop and provide access to local assistance capable of bridging the gap between decision makers and scientists. Studies, local decision makers from site visits, and stakeholders suggest ways federal adaptation efforts could better serve the needs of local infrastructure decision makers.
Specifically:

Federal agencies and academic institutions collect a vast array of climate-related data, but local infrastructure decision makers face difficulty identifying, accessing, and using them because, as noted by a 2010 NRC study, this information exists in an uncoordinated confederation of networks and institutions. Of particular note, federal efforts to provide access to site-specific, climate-related information are an emerging priority, but it remains unclear how these efforts will address the challenge of identifying the best available information to use in infrastructure planning. According to the 2010 NRC report, the end result of this information not being easily accessible is that people may make decisions—or choose not to act—without it.

At the locations we visited, access to local assistance was a key variable that enabled decision makers to translate available climate-related data into information useful for project-level planning, but it is unclear how emerging federal efforts will help decision makers in other locations obtain similar assistance. Without clear sources of local assistance, infrastructure decision makers—who may not be familiar with climate science and who have many other responsibilities of immediate importance—will be left to sort it out themselves and will face difficulty justifying investment in adaptation measures, the benefits of which may not be realized for several decades into the future.

Notwithstanding the limited role federal agencies play in most project-level planning, certain types of federal infrastructure investments need to meet the requirements of NEPA. On February 18, 2010, CEQ issued draft guidance on how federal agencies can consider the effects of climate change in the NEPA process. However, CEQ has not finalized the guidance or issued regulations addressing how, if at all, federal agencies are to consider the effects of climate change in the NEPA process, and it also has not indicated when or if the guidance will be finalized. Without finalized guidance from CEQ, it is unclear how, if at all, agencies are to consistently consider climate change in the NEPA process.

Professional associations, not relevant federal agencies such as EPA (which has the lead for federally funded wastewater systems) or the U.S. Department of Transportation (which has the lead for federally funded roads and bridges), generally develop and maintain the design standards critical for translating adaptation into infrastructure engineering practice. OMB Circular A-119 directs federal agencies to use voluntary consensus standards in lieu of government-unique standards except where inconsistent with law or otherwise impractical. Professional associations have started to investigate how to incorporate climate-related data into design standards, with as yet undetermined outcomes. Not having a formal process for incorporating climate change information in design standards effectively ensures that most of the infrastructure design community cannot act without unacceptable professional risks, according to certain local decision makers and stakeholders. As a result, there have been calls for a more active federal role in encouraging professional associations to consider climate change in design standards.
To improve the resilience of the nation's infrastructure to climate change, we are making the following four recommendations:

that the Executive Director of the United States Global Change Research Program or other federal entity designated by the Executive Office of the President work with relevant agencies to (1) identify for decision makers the "best available" climate-related information for infrastructure planning and update this information over time and (2) clarify sources of local assistance for incorporating climate-related information and analysis into infrastructure planning, and communicate how such assistance will be provided over time;

that the Chairman of the Council on Environmental Quality finalize guidance on how federal agencies can consider the effects of climate change in their evaluations of proposed federal actions under the National Environmental Policy Act; and

that the Secretary of the U.S. Department of Transportation and the Administrator of the Environmental Protection Agency work with relevant professional associations to incorporate climate change information into design standards.

We provided a draft of this report for review and comment to the Secretary of Transportation, the Administrator of EPA, the Chair of CEQ, the Director of OSTP, and the Executive Director of USGCRP. They did not provide official written comments but instead provided technical comments, which we incorporated as appropriate.

As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of Transportation, the Administrator of EPA, the Chair of CEQ, the Director of OSTP, the Executive Director of USGCRP, the appropriate congressional committees, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (212) 512-3841 or trimbled@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II.

This report (1) describes what is known about the impacts of climate change on the nation's infrastructure, specifically roads and bridges, wastewater management systems, and National Aeronautics and Space Administration (NASA) centers; (2) analyzes the extent to which potential climate change impacts are incorporated into infrastructure planning; (3) identifies the factors that enabled certain decision makers to integrate climate change impacts into infrastructure planning; and (4) analyzes federal efforts to address the adaptation needs of local infrastructure decision makers and describes potential opportunities for improvement identified by studies, local decision makers who integrated climate change into infrastructure planning, and other stakeholders.

We selected the road and bridge and wastewater management system infrastructure categories because they account for significant federal funding and are the focus of specific federal adaptation initiatives. We selected NASA centers because these facilities are large; they manage mission-critical assets that are difficult, if not impossible, to move or replace; and, importantly, NASA has an emerging partnership effort focused on considering climate change information within the planning for its centers.
NASA centers are also instructive examples because they incorporate roads, bridges, wastewater systems, and other infrastructure in one place as a system to support a mission. Before describing in detail the methods we used, it is important to recognize a few limits of our approach and report. First, it focuses on planning for new projects or significant rebuilds and does not focus on operations and maintenance or wide-scale efforts to assess the vulnerability of the existing stock of infrastructure. Second, this report focuses on planning for specific projects, not long-range planning or strategic prioritization processes. Third, this report describes how decision makers incorporated climate change adaptation into infrastructure planning and implementation, but it does not generally assess the effectiveness of the adaptive actions themselves. The need for further research in this area is widely acknowledged but is not the focus of this report.

To explain the potential consequences of climate change on the nation's infrastructure, we reviewed assessments from the National Research Council, the United States Global Change Research Program, and relevant federal agencies. We identified these assessments using government and National Academies websites and prior GAO reports on climate change. We then evaluated whether the assessments fit within the scope of work and contributed to the objectives of this report. For relevant assessments, we used in-house scientific expertise to analyze the soundness of the methodological approaches they utilized, and we determined them to be sufficiently sound for our purposes. Relevant assessments are cited throughout this document.

To identify the extent to which climate change impacts are incorporated into infrastructure planning, we (1) reviewed laws, regulations, and planning guidance; (2) analyzed relevant reports on climate change adaptation; and (3) interviewed knowledgeable infrastructure stakeholders and officials from professional associations, federal agencies, and other organizations. To identify relevant reports on climate change adaptation, we conducted a literature search and review with the assistance of a technical librarian. We searched various databases, such as ProQuest, and focused on peer-reviewed journals, government reports, trade and industry articles, and publications from associations, nonprofits, and think tanks from 2005 to the present. We also searched for reports from the Congressional Research Service, the Congressional Budget Office, and agency inspectors general. To supplement this review, we analyzed Internet-based adaptation report databases such as the Climate Adaptation Knowledge Exchange. Relevant reports are cited in footnotes throughout this report.

To identify knowledgeable stakeholders, we reviewed our prior climate change work and relevant reports to identify individuals with specific knowledge of climate change adaptation and infrastructure. We interviewed professional association stakeholders from the American Association of State Highway and Transportation Officials, American Society of Civil Engineers, National Association of Clean Water Agencies, and the Water Utility Climate Alliance; federal agency officials from the Environmental Protection Agency and the Federal Highway Administration; and other stakeholders familiar with infrastructure adaptation, including the Georgetown Climate Center and the Center for Climate and Energy Solutions.
We also coordinated with the Congressional Budget Office and the Congressional Research Service. To examine how climate change has been considered in infrastructure planning, we visited seven locations where decision makers had done so: three locations focused on roads and bridges (Washington State Route 522; the Interstate 10 Twin Span Bridge near New Orleans, Louisiana; and Louisiana State Highway 1), two locations focused on wastewater management systems (the King County Wastewater Treatment Division in Washington and the Milwaukee Metropolitan Sewerage District in Wisconsin), and two NASA centers (Langley Research Center in Hampton, Virginia, and Johnson Space Center in Houston, Texas). To select the transportation and wastewater sites, we reviewed studies; interviewed federal, state, and local agency officials; and analyzed Internet-based adaptation case study databases maintained by academic institutions such as the Georgetown Climate Center to identify examples where climate change was considered in infrastructure planning. From this review, we found a universe of about 20 potential transportation and wastewater management system examples. Based on follow-up interviews and additional research, we narrowed the potential list for each category based on whether the candidates had considered climate change during both the project planning and implementation phases. We selected three projects focused on roads and bridges and two locations focused on wastewater management systems in an attempt to illustrate different potential climate impacts in different regions of the United States (the Pacific Northwest, Great Lakes, Gulf Coast, and Mid-Atlantic), but we were somewhat limited by the small set of potential site visits.

NASA scheduled climate change adaptation workshops at two of its centers (Langley Research Center and Johnson Space Center) during the time frame of our work. We attended the workshops and collected information from a variety of federal and local stakeholders, including government officials and academic institutions. The sites we selected are not representative of all infrastructure adaptation efforts taking place; however, they include a variety of responses to climate change effects across different infrastructure categories. Findings from these site visits cannot be generalized to those we did not include in our nonprobability sample. We gathered information during and after the site visits through observation of adaptation efforts, interviews with officials and stakeholders, and a review of documents provided by these officials. As part of the site visits, we interviewed academic institutions that provided climate-related information to decision makers, including the Wisconsin Initiative on Climate Change Impacts, a collaboration between the University of Wisconsin–Madison's Nelson Institute for Environmental Studies and the Wisconsin Department of Natural Resources; the Climate Impacts Group, an interdisciplinary research group at the University of Washington; and the Southern Climate Impacts Planning Program, a collaborative research program of the University of Oklahoma and Louisiana State University. We also followed up with officials after our visits to gather additional information.
To analyze federal efforts to address the adaptation needs of state and local infrastructure decision makers and to describe opportunities for improvement, we (1) interviewed federal officials from the Council on Environmental Quality, the Department of Transportation's Federal Highway Administration, the Environmental Protection Agency, and the United States Global Change Research Program and (2) reviewed available studies on federal adaptation efforts. To monitor federal adaptation-related activities, we accessed materials stored on www.fedcenter.gov, the federal government's home for comprehensive environmental stewardship and compliance assistance information. We also attended the Adaptation Futures International Conference on Climate Adaptation in May 2012, cohosted by the University of Arizona in Tucson, Arizona, and by the United Nations Environment Programme's Programme of Research on Climate Change Vulnerability, Impacts and Adaptation, to learn about climate change adaptation research and approaches from around the world.

We conducted this performance audit from October 2011 to April 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

In addition to the individual named above, Steve Elstein (Assistant Director), Kendall Childers, Dr. Dick Frankel, Cindy Gilbert, Anne Hobson, Richard P. Johnson, Mary Koenen, Sara Lupson, Alison O'Neill, Dan Royer, Jeanette Soares, Ardith Spence, Kiki Theodoropoulos, and J.D. Thompson made key contributions to this report.
The federal government invests billions of dollars annually in infrastructure, such as roads and bridges, facing increasing risks from climate change. Adaptation--defined as adjustments to natural or human systems in response to actual or expected climate change--can help manage these risks by making infrastructure more resilient. GAO was asked to examine issues related to infrastructure decision making and climate change. This report examines (1) the impacts of climate change on roads and bridges, wastewater systems, and NASA centers; (2) the extent to which climate change is incorporated into infrastructure planning; (3) factors that enabled some decision makers to implement adaptive measures; and (4) federal efforts to address local adaptation needs, as well as potential opportunities for improvement. GAO reviewed climate change assessments; analyzed relevant reports; interviewed stakeholders from professional associations and federal agencies; and visited infrastructure projects and interviewed local decision makers at seven sites where adaptive measures have been implemented.

According to the National Research Council (NRC) and others, infrastructure such as roads and bridges, wastewater systems, and National Aeronautics and Space Administration (NASA) centers are vulnerable to changes in the climate. Changes in precipitation and sea levels, as well as increased intensity and frequency of extreme events, are projected by NRC and others to impact infrastructure in a variety of ways. When the climate changes, infrastructure--typically designed to operate within past climate conditions--may not operate as well or for as long as planned, leading to economic, environmental, and social impacts. For example, the National Oceanic and Atmospheric Administration estimates that, within 15 years, segments of Louisiana State Highway 1--providing the only road access to a port servicing 18 percent of the nation's oil supply--will be inundated by tides an average of 30 times annually due to relative sea level rise. Flooding of this road effectively closes the port.

Decision makers have not systematically considered climate change in infrastructure planning for various reasons, according to representatives of professional associations and agency officials who work with these decision makers. For example, more immediate priorities--such as managing aging infrastructure--consume time and resources, limiting decision makers' ability to consider and implement climate adaptation measures. Difficulties in obtaining and using information needed to understand vulnerabilities and inform adaptation decisions pose additional challenges.

Key factors enabled some local decision makers to integrate climate change into infrastructure planning. As illustrated by GAO's site visits and relevant studies, these factors included (1) having local circumstances such as weather-related crises that spurred action, (2) learning how to use available information, (3) having access to local expertise, and (4) considering climate impacts within existing planning processes. As one example, the Milwaukee Metropolitan Sewerage District managed risks associated with more frequent extreme rainfall events by enhancing its natural systems' ability to absorb runoff by, for instance, preserving wetlands. This effort simultaneously expanded the sewer system's capacity while providing other community and environmental benefits.
District leaders enabled these changes by prioritizing adaptation, using available local-level climate projections, and utilizing local experts for assistance.

GAO's report identifies several emerging federal efforts under way to facilitate more informed adaptation decisions, but these efforts could better support the needs of local infrastructure decision makers in the future, according to studies, local decision makers at the sites GAO visited, and other stakeholders. For example, among its key efforts, the federal government plays a critical role in producing the information needed to facilitate more informed local infrastructure adaptation decisions. However, as noted by NRC studies, this information exists in an uncoordinated confederation of networks and institutions, and the end result of it not being easily accessible is that people may make decisions--or choose not to act--without it. Accordingly, a range of studies and local decision makers GAO interviewed cited the need for the federal government to improve local decision makers' access to the best available information to use in infrastructure planning.

GAO recommends, among other things, that a federal entity designated by the Executive Office of the President (EOP) work with agencies to identify for local infrastructure decision makers the best available climate-related information for planning and also to update this information over time. Relevant EOP entities did not provide official comments but instead provided technical comments, which GAO incorporated as appropriate.
The nonprofit sector is diverse and has a significant presence in the U.S. economy. Of the estimated 1.7 million tax-exempt organizations in fiscal year 2008, about 69 percent were religious, charitable, or similar organizations or private foundations, collectively referred to as 501(c)(3) organizations, and about 8 percent were social welfare organizations. Nonprofit organizations provide services in a wide variety of policy areas such as health care, education, and human services. As we have previously reported, the federal government is increasingly partnering with nonprofit organizations because nonprofit organizations can offer advantages in delivering services compared to government agencies—they are more flexible, they can act more quickly, and they often have pre-existing relationships with local officials and communities.

Nonprofit organizations have provided a wide range of direct long-term assistance and recovery services to those affected by the Gulf Coast hurricanes, including job training, counseling, and housing. Nonprofit organizations have contributed significant support—financial and non-financial—to post-Katrina and Rita recovery efforts. At the end of 2009, FEMA officials in Louisiana reported that more than $24 million in donated dollars, volunteer hours, and goods had been leveraged through long-term recovery groups to provide permanent housing and address other unmet needs. In addition, some nonprofit organizations provided technical and support services to those nonprofit organizations that rendered direct recovery services to Gulf Coast residents. For example, as of 2007, the Louisiana Family Recovery Corps (LFRC) had provided more than $20 million in programs, initiatives, and activities in the Greater New Orleans area since the storms. In its 2005-2008 retrospective, the Louisiana Disaster Recovery Foundation reported awarding grants totaling nearly $29 million to nonprofit organizations involved in Louisiana's recovery process.

Organizations such as the Mississippi Center for Nonprofits had a well-established communications infrastructure with hundreds of nonprofits within the state of Mississippi before the 2005 storms and used this network following the hurricanes to disseminate grant and technical information, provide vital resource referrals, and communicate available training workshops for nonprofit service providers. The Louisiana Association of Nonprofit Organizations (LANO) was similarly positioned in the state of Louisiana. According to its officials, LANO serves more than 1,000 nonprofit organizations throughout the state of Louisiana. One of LANO's field offices is located in New Orleans in a building that it shares with approximately 30 other nonprofit organizations, many of which are providing recovery assistance to residents of the surrounding neighborhoods, which are among the poorest in the metropolitan New Orleans area. Table 1 below provides examples of some of the nongovernmental partners that helped to build the capacity of direct service providers involved in the Gulf Coast recovery by providing human resources, guidance, training, funding, and technical assistance.

One of the primary mechanisms the federal government uses to provide support to nonprofit organizations is federal grants. Federal grants are forms of financial assistance from the government to a recipient for a particular public purpose that is authorized by law. Federal grant funds flow to the nonprofit sector in various ways.
For example, some grant funds are awarded directly to nonprofits, while others are first awarded to states, local governments, or other entities and then awarded to nonprofit service providers. Federal grant funding may also be awarded to nonprofit subgrantees through contracts. Federal laws, policies, regulations, and guidance associated with federal grants apply regardless of how federal grant funding reaches the final recipients.

Nonprofit organizations in Louisiana and Mississippi provided numerous human recovery services to Gulf Coast residents following Hurricanes Katrina and Rita, and several of those services, including housing, case management, and mental health services, were supported either directly by the federal government or indirectly through other organizations receiving federal support. The federal government relied on both pre-existing and newly developed funding programs when supporting nonprofit organizations. For example, the federal government used well-established grants such as Temporary Assistance for Needy Families, the Community Development Block Grant, and the Social Services Block Grant to provide financial and human recovery assistance to Louisiana and Mississippi residents. Nonprofit officials, as well as Louisiana and Mississippi state officials, also identified additional federal funding programs that were in place before the storms and were used to assist in human recovery services, such as the Department of Housing and Urban Development’s (HUD) workforce housing grants, Entitlement Cities, and the Low-Income Home Energy Assistance Program; FEMA’s Community Disaster Loans; and the Low-Income Housing Tax Credit program. There were also several newly created grants with emergency supplemental funds designed to provide human recovery assistance to hurricane-affected areas. These included the Department of Health and Human Services’ (HHS) Primary Care Access and Stabilization Grant and HUD’s Disaster Housing Assistance Program. See table 2 for descriptions of selected federal funding programs that provided assistance to victims of the Gulf Coast hurricanes.

Louisiana and Mississippi differed in how funds from these programs were distributed. Louisiana created organizations like the nonprofit LFRC and the state-level Louisiana Recovery Authority to serve as custodians and distributors of some of its federal funding, while Mississippi took advantage of provisions in the National Community Service Trust Act to establish a state-level commission to oversee the state’s community service block grants.

Using federal funding programs such as those shown in table 2, nonprofit organizations have provided a wide range of recovery services to residents affected by Hurricanes Katrina and Rita, including housing, long-term case management, and a variety of counseling services (including crisis management and substance abuse). According to nonprofit officials in both Louisiana and Mississippi, as of early 2010, Gulf Coast residents continue to need services in these areas. For example, CDBG funds are being used by Providence Community Housing, a collaborative effort of Catholic housing and social service organizations in the New Orleans community, to build, rebuild, or repair 7,000 units of affordable houses and apartments over a 5-year period that began in 2006. Some nonprofit officials also told us that long-term case management services were still widely needed.
For example, according to officials with Lutheran Episcopal Services in Mississippi, as of the summer of 2008, this nonprofit had provided case management services for several years to Katrina-affected residents in the Mississippi Gulf Coast region through the efforts of approximately 60 case managers who worked with clients throughout Mississippi.

Some nonprofits in our review were instrumental in helping other nonprofits access available federal funds in order to deliver much needed services. The United Methodist Committee on Relief (UMCOR), for example, served as the umbrella grants manager for Katrina Aid Today (KAT), a national consortium of nine subgrantees. The consortium was required to provide matching funds and was able to put up $30 million of in-kind funds, while FEMA channeled foreign donations of $66 million over a 2-year period. At the completion of its grant-funded activity in March 2008, KAT had enabled case management services for approximately 73,000 households. As the umbrella grants manager, UMCOR provided financial compliance monitoring, technical assistance, and training to the nine consortium members. Nonprofits such as Louisiana’s Odyssey House and Mercy Family Center also provided crisis, mental health, and substance abuse counseling made possible as the result of federal funds. In addition, according to officials from the Catholic Charities Archdiocese of New Orleans, their organization contracted with the Louisiana State Office of Mental Health, and the resulting Louisiana Spirit hurricane recovery project, funded by FEMA, helped provide intervention and mental health services to its clients.

The National Response Framework (NRF) designates the FEMA Voluntary Agency Liaison (VAL) as the primary liaison to the nonprofit community. VALs are responsible for initiating and maintaining working relationships between FEMA; federal, state, and local agencies; and nonprofit organizations. VALs also advise state emergency agencies on the roles and responsibilities of nonprofit organizations active in the recovery. The FEMA VAL system is staffed by a combination of permanent federal employees and temporary and term-specified employees whose work focuses on a specific disaster. Among the permanent FEMA employees are 10 regional VALs—one for each FEMA regional office—along with an additional 2 VALs in the Caribbean Area and Pacific Area offices. FEMA also has five VAL staff based at headquarters whose role is to provide overall VAL program and policy development; a national perspective; training and development for states and regions; support services in the field; coordination with other DHS and FEMA entities that have nonprofits as stakeholders; and oversight of the FEMA Donations and Volunteer Management Program, including the National Donations Management Network. FEMA also deploys Disaster Assistance Employees, reservists who can be called up to carry out the VAL role in local communities following a specific disaster and who, depending on the size of the disaster, typically serve for a period of approximately 50-60 days, with a maximum of 50 consecutive weeks in a calendar year. As of March 31, 2010, FEMA had 90 Disaster Assistance Employees in its reserve VAL cadre. In addition, after the Gulf Coast hurricanes, FEMA hired 40 “Katrina VALs,” 10 of whom remained in Louisiana as of May 24, 2010.
These are term-limited federal employees, hired locally, who are designated specifically to address Katrina-related issues; the 10 who remain do so based on FEMA’s need for their continued work. FEMA is not planning to retain these individuals after Katrina-related work is finished.

FEMA’s VAL program received general approval from the state, local, and nonprofit officials we spoke with. Several nonprofit officials told us that VALs were instrumental in helping initially set up and guide the operation of long-term recovery committees. Officials from state Voluntary Organizations Active in Disasters (VOAD) in Louisiana and Mississippi spoke highly of their respective regional VALs and said the VALs were involved with the VOADs on a regular basis, helping coordination between the VOADs and nonprofits. Officials from various nonprofit organizations also cited the usefulness of VAL coordination. For example, VALs helped extend the federal government’s reach to the nonprofit sector by also working with state-level intermediaries, such as the Louisiana Family Recovery Corps (LFRC) and the Mississippi Commission for Volunteer Service (MCVS), whose responsibilities included the coordination of nonprofit service providers active in the recovery effort. The LFRC was created in 2005 following the Gulf Coast hurricanes and was designated by the Louisiana legislature in 2007 as the state’s coordinator for human recovery. The MCVS, established by statute as the state’s office of volunteerism and an affiliate of the federal Corporation for National and Community Service (CNCS), was designated by the Governor of Mississippi to coordinate the general activities of the nonprofit sector and, specifically, to oversee the implementation of FEMA’s Phase I and Phase II Disaster Case Management Pilot program. Officials from the Mississippi Center for Nonprofits and the Mississippi Interfaith Disaster Task Force said they had good working relationships with the FEMA VALs. Further, officials from state entities such as the Louisiana Recovery Authority characterized their partnership with FEMA VALs as successful and one of the best examples of local coordination they encountered.

In November 2005, the President issued an executive order establishing the Office of the Federal Coordinator for Gulf Coast Rebuilding (OFC) with the broad mission of supporting recovery efforts following Hurricanes Katrina and Rita. OFC was created in response to the unprecedented rebuilding challenges presented by these storms as well as concerns regarding the lack of coordination in the government’s initial response to these events. Although OFC was originally scheduled to expire in November 2008, the President extended it several times until the office closed on April 1, 2010. In previous work on OFC, we identified four key functions performed by the office, which provide a useful framework for understanding how the office supported nonprofits working on Gulf Coast recovery. Nonprofits were directly involved in three of these four OFC functions.

First, OFC helped nonprofits to identify and address obstacles to recovery. These obstacles included both challenges facing specific organizations and broad problems facing entire communities of which nonprofits were a part. An example of the former occurred when OFC worked with MCVS and FEMA to address contracting challenges involving FEMA’s Disaster Case Management Phase II pilot program.
An example of the latter is OFC’s sponsorship of a series of forums and workout sessions, which brought together a diverse group of stakeholders, including numerous nonprofits, foundations, and faith-based organizations, to discuss impediments to recovery and try to identify potential solutions. The topics of these sessions included crime, education reform, and economic development.

Second, OFC supported nonprofits by sharing and communicating a variety of recovery information. One example of this was the joint OFC-FEMA effort known as the Transparency Initiative, which began in February 2008. This Web-based information sharing effort enabled interested stakeholders, including nonprofits, to track the status of selected public infrastructure rebuilding projects (such as a school or hospital) by providing detailed information on the Public Assistance Grant funds allocated for a project and the project’s status. The initiative received positive feedback from a range of nonprofits involved in Gulf Coast rebuilding, including Catholic Charities and Tulane University. OFC also worked to provide updates and other information relating to Gulf Coast recovery through a nonprofit outreach strategy, which changed and developed over time. During the first few years of OFC’s operation, although the office compiled a listing of many nonprofits involved in recovery and rebuilding activities, it focused its outreach efforts primarily on large, well-known, national nonprofit organizations, such as Catholic Charities, the Southern Baptist Convention, and the United Methodist Committee on Relief. These organizations had the capacity to work with the government and, in many cases, already had pre-existing relationships with federal and state officials. OFC largely relied on these national organizations to relay information to the local level through their various local partners and affiliates. Given this approach, it is perhaps not surprising that many of the nonprofit officials we spoke with in both Louisiana and Mississippi told us that initially they did not have any direct interaction with OFC following the hurricanes. Since 2009, however, several nonprofit and local officials we spoke with in Louisiana and Mississippi said that OFC conducted considerably more outreach and became much more involved with them, and they commended OFC’s recent efforts. According to a senior OFC official, in 2009 the office changed its nonprofit outreach to place a heavier emphasis on direct contact with smaller organizations at the grassroots level. Toward that end, the Federal Coordinator frequently visited Louisiana and Mississippi to personally conduct outreach to a variety of smaller nonprofit organizations and subsequently built a database of 300 to 400 nonprofit organizations.

Third, OFC assisted nonprofits through its facilitation of networks and dialogue among a wide range of recovery stakeholders from federal, state, and local governments; other nonprofits; and the private sector. OFC brought a diverse group of stakeholders together to meet each other and discuss issues of common interest through numerous forums and roundtables. In contrast to the workout sessions mentioned above, the primary goal of these meetings was not to focus on a specific set of challenges, but rather to help foster and expand connections among members of the Gulf Coast recovery community and provide a forum for them to share information with each other.
Similar to OFC’s approach toward information sharing with nonprofits, the way the office fostered networking changed over the years. In its final year of operation, OFC moved away from relying solely on formal events like forums and roundtables and increasingly made use of less formal meetings and networking events. For example, the Federal Coordinator at the time said the office placed a high priority on informal and direct interactions with communities on the ground, with whom she and her staff spent more than half of their time. In addition, OFC facilitated connections between nonprofits that were in the process of applying for recovery grants and other organizations that had prior success in obtaining such funds and were willing to share their knowledge and expertise. Finally, in its last year of operation, OFC facilitated meetings between the White House Office of Faith-Based and Neighborhood Partnerships centers established within 12 federal agencies, including DHS, HUD, and the Small Business Administration (SBA), and local community and faith-based organizations. Additionally, during its last year of operation, OFC worked to ensure that the secretaries of federal agencies relevant to disaster recovery designated a senior-level advisor to serve as a point person with OFC as well as with the nonprofit organizations on the ground. Some of the federal agencies that established this position included DHS, HUD, SBA, HHS, and the United States Department of Agriculture.

Other federal agencies also provided important nonmonetary assistance to nonprofit organizations involved in Gulf Coast recovery. Federal agencies provided trained volunteers and volunteer management services to community-based nonprofits to help them meet increased demand for services. For example, CNCS reported that it provided more than $160 million worth of resources, including more than 105,000 volunteers who contributed more than 5.4 million hours to Gulf Coast states recovering from the 2005 hurricanes. Some of the nonprofit officials we interviewed indicated that they had either hired an AmeriCorps worker or VISTA volunteer or were familiar with their work as a result of partnering with them on various recovery projects. Nonprofits such as Rebuilding Together New Orleans were able to take advantage of a waiver that enabled FEMA and CNCS to cover the cost of some volunteer stipends.

Federal agencies also provided nonprofit organizations with training and technical assistance that helped them manage federal grant program requirements. For example, nonprofit officials attended a 2008 White House-sponsored conference designed to highlight and strengthen the role of faith-based and community-based organizations in disaster relief and preparedness. The conference, held in New Orleans, Louisiana, offered workshops hosted by federal agencies including the Departments of Justice, Agriculture, Labor, HHS, HUD, Education, Homeland Security, Commerce, and Veterans Affairs; the Agency for International Development; and SBA. These workshops provided technical assistance and training designed to help faith-based and community-based nonprofits understand the federal grant process, as well as networking opportunities with the federal government.

The rules and requirements that typically accompany federal grants, along with the limitations of many nonprofits’ financial and administrative capacity, made it difficult for some organizations to access federal funding to deliver recovery services.
Our previous work has shown that many nonprofits struggle to accomplish their missions because they lack the resources that would allow them to better manage their finances and strengthen their administrative or technology infrastructure. We have recently reported that federal grants typically do not provide support for these types of overhead costs, which include administrative and infrastructure costs. In light of this gap, officials from several nonprofits told us that they believed the record keeping, documentation, and reporting requirements of federal grants were too complicated and cumbersome. Nonprofit organizations’ perceptions of federal accountability requirements sometimes also served as an impediment to obtaining funding from the federal government, because officials at these organizations perceived compliance with federal grant requirements to be too resource-intensive and not worth the effort required to obtain such funds. Officials at one nonprofit raised concerns about what they characterized as the “massive” documentation required by the state to justify reimbursements for costs incurred in implementing federal grant programs. According to these nonprofit officials, paperwork sometimes had to be submitted repeatedly, and state officials, who were supposed to facilitate communication between federal agencies and nonprofit service providers, did not always know what documents were required. Further, these officials stated that the preparation of the required reimbursement documentation consumed approximately 30 hours of staff time each month, and that figure did not include the time required to comply with other reporting requirements under the grant. According to some state recovery officials, the fear of being audited or being found noncompliant with program regulations caused many nonprofits to shy away from federal disaster assistance, much to the detriment of the state, which relies on nonprofits to provide services after a disaster. One nonprofit official, who chose not to apply for federal funds, explained that even if he had the resources to hire additional staff to fill out all the federal grant paperwork, he would rather put those resources into a direct service, such as rebuilding damaged homes in the community. While recognizing the burdens that may accompany federal grant requirements, it is also important to acknowledge the potential value of such requirements in helping to minimize fraud, waste, and abuse and in ensuring fiscal accountability to the American taxpayer.

Nonprofit officials we spoke with were also concerned with the distribution of federal grant funds. Officials from several nonprofits reported that some federal grant awards were late, putting additional strain on the limited resources of smaller community-based organizations. For example, funding for the FEMA Phase II pilot program for disaster case management was not awarded until July 2008 in Mississippi and October 2008 in Louisiana, although the funding period began in June 2008. Phase II grantees had already hired staff and begun delivering case management services in anticipation of grant funding being available. As a result, some nonprofits had difficulty meeting expenses while they waited for grant funding to be awarded. As we have previously reported, many of the smaller case management organizations were unable to find alternative resources to pay the case managers hired in June and had to lay off caseworkers while waiting for federal funding to be made available.
Larger organizations such as Catholic Charities, on the other hand, sometimes waited up to 1 year to receive reimbursement for as much as $1 million in grant funds but were able to do so without having to take such actions.

In recognition of the widespread devastation that resulted from the 2005 hurricanes, and to address the challenges associated with navigating the federal aid process, Congress passed legislation to amend several assistance programs that helped nonprofit organizations deliver federally supported recovery assistance to residents of the Gulf Coast. Most notably, provisions in the Post-Katrina Emergency Management Reform Act of 2006 expanded eligibility requirements for nonprofit organizations to receive FEMA grant assistance, which enabled some nonprofit organizations to receive financial assistance to rebuild their storm-damaged facilities to better serve their clients. Congress also passed special legislation that provided additional cash assistance to hurricane victims through the TANF block grant. In order to deploy more highly trained workers to impacted communities, CNCS waived state matching requirements for sponsoring AmeriCorps workers in Louisiana and counted the cost of housing the workers as an in-kind match. These program waivers made it easier for nonprofits with limited financial resources to sponsor AmeriCorps workers.

In February 2009, President Obama created the President’s Advisory Council for Faith-Based and Neighborhood Partnerships in order to bring together leaders and experts in fields related to the work of faith-based and neighborhood organizations. The council was designed to make recommendations to the government on how to improve partnerships. In March 2010, the council issued its first report, which, while not focused on Hurricane Katrina and Rita recovery efforts, included recommendations that could be useful for long-term disaster recovery. For example, the report recommended providing greater flexibility for the coordination and integration of government funds designated for specific program activities. The report went on to suggest that federal agencies develop rules and regulations to encourage coordination and integration of programs and services, and that agencies be required to be receptive to requests for rulemaking changes aimed at facilitating coordination and integration. In addition, the council recommended that, in order to ease the burden on nonprofit social service agencies, agencies remove barriers to service provision such as matching fund requirements, burdensome reporting and regulations, and slow payments and reimbursements.

While some recovery officials and nonprofit representatives we spoke with held generally favorable opinions about the usefulness of the assistance provided by FEMA VALs, they identified opportunities for improvement in the areas of training and information sharing. A senior FEMA official told us that, following Hurricane Katrina, the need for VALs considerably outstripped the supply then available in the VAL cadre, requiring FEMA to hire temporary VALs without much experience. In addition, ensuring continuity presented a challenge, as FEMA experienced large turnover among VALs in the first year after the disaster. FEMA officials acknowledged that inconsistent performance among VALs was partly due to frequent changes in assigned staff, as Disaster Assistance Employees were rotated in and out of VAL positions in the months immediately following the storms.
However, they noted that this became less of a concern as time passed, as more experienced VALs were brought on board, and as VALs were hired from the local population and were therefore able to remain in their roles for longer periods. FEMA offered several independent-study disaster training courses for VALs, including one directly related to VAL duties, entitled “The Role of Voluntary Agencies in Emergency Management.” FEMA also provided some basic in-person training to VALs in the field, but this training was provided on an ad hoc basis, primarily by a single regional VAL. This official provided training for two different regions (a total of 13 states) as well as recovery training to VALs in Louisiana and Mississippi following Hurricanes Katrina and Rita, while also handling his regular regional VAL duties.

We have previously reported that VALs could benefit from additional training on federal programs and resources. For example, we found that FEMA did not provide training for VALs on Public Assistance Grant policies, and we recommended that the agency provide role-specific training to VAL staff, including instruction on the Public Assistance Grant program and the policies and opportunities that apply to nonprofit organizations. FEMA has taken steps to respond to this recommendation and to address other training issues in its VAL program. For example, FEMA has issued a VAL Handbook, which provides a written guide on essential VAL activities and procedures, and has been revising its VAL training for the past year. It expects to complete three VAL-specific courses by the end of 2010. One of the courses FEMA officials are working on is an introductory VAL course, for which they are holding focus groups with regional VALs and voluntary agencies to obtain input. FEMA expects to pilot this course in the fall of 2010. FEMA is also developing a volunteer management course, drawing on subject matter experts from the National VOAD, recovery committees, and state and local officials. The VAL program expects the revised VAL training program to be incorporated into a larger FEMA initiative involving credentialing of disaster workers.

FEMA officials also acknowledged that VALs would benefit from a mechanism through which they can more effectively share information and best practices drawn from a variety of sources (such as VALs, local recovery partners, and the National VOAD). GAO’s guidance on internal controls encourages agency management to provide effective internal communications as one way to promote an appropriate internal control environment. This guidance suggests agencies establish mechanisms to (1) allow for the easy flow of information down, across, and up within the organization and (2) enable employees to recommend improvements in operations. Consistent with this concept, FEMA has taken steps to improve information sharing in its VAL program. More specifically, FEMA is developing a knowledge repository known as the VAL Community of Interest on an internal DHS network site. Once operational, FEMA officials expect the site to function as a repository of resources, planning documents, and best practices that will facilitate information sharing and be readily available to the entire VAL community, even when VALs are deployed in the field.
Collaboration is essential for an effective partnership among the wide range of participants involved in the disaster recovery process. The National Response Framework (NRF) defines the roles of federal, state, local, and tribal governments; the private sector; and voluntary organizations in response to disasters. The NRF, which became effective in March 2008, designates 15 emergency support functions that address specific emergency disaster response needs. We have previously reported on the importance of defining roles and responsibilities in both response and recovery. FEMA acknowledges that recent disasters highlight the need for additional guidance, structures, and leadership to improve support and management of recovery activities.

As we have recently reported, the federal government has taken steps to strengthen the nation’s disaster recovery process. In 2006, Congress required FEMA to develop a national disaster recovery strategy for federal agencies involved in recovery. In response to this mandate, FEMA and HUD are leading a diverse group of federal agencies and other organizations to develop the National Disaster Recovery Framework (NDRF). Among the NDRF’s objectives is to define federal, state, local, tribal, nonprofit, private-sector, and individual citizens’ roles in disaster recovery. To date, the NDRF working group has facilitated various meetings and developed a Web site for input from federal, state, tribal, and local government leaders; recovery-assistance providers; nonprofit organizations; private-sector representatives; and interested citizens. The group has developed a draft framework, which includes details about the expected roles and responsibilities of nonprofits in disaster recovery. In addition, at the President’s request, the secretaries of DHS and HUD are co-chairing a Long-Term Disaster Recovery Working Group composed of the secretaries and administrators of 20 federal departments, agencies, and offices. This working group was established at the end of September 2009 and joined the NDRF effort started by FEMA in August 2009. This effort to examine lessons learned from previous catastrophic disaster recovery efforts includes areas for improved collaboration and methods for building capacity within state, local, and tribal governments as well as within the nonprofit, faith-based, and private sectors. The working group is charged with developing a report to the President that will provide recommendations on how to improve long-term disaster recovery.

The National VOAD is leading a parallel effort to establish a National Nonprofit Relief Framework (NNRF) intended to complement the NRF and NDRF by providing detailed guidance on nonprofit organization roles and responsibilities, programs, policies, and interagency protocols. FEMA is also involved in this effort and, according to information provided by FEMA officials, this document will serve as a major source of program coordination information for both government and nongovernmental organizations involved in all phases of emergency management. According to these officials, the NNRF will help fill a planning void that currently exists regarding what is known about the disaster response and recovery capacity of the nonprofit sector. A final version of the NNRF is expected to be issued in December 2010.
In addition to the guidance that frameworks like the NDRF and NNRF can offer, cooperative agreements provide another mechanism that can further clarify the roles and responsibilities of specific nonprofits involved in recovery activities. Several nonprofit and federal officials we spoke with identified such agreements, or memorandums of understanding established between FEMA and specific nonprofit organizations, as a tool to clarify expectations and avoid confusion that can arise in the wake of a disaster. These cooperative agreements could provide a road map for federal-nonprofit partnerships by outlining the functional capabilities and resources of each partner and by laying out implementation strategies for delivering critical recovery services. Such agreements could also help avoid duplication of effort among the various disaster recovery players and expedite recovery efforts.

We provided a draft of this report to the Secretary of Homeland Security for review and comment. DHS concurred with the report but did not provide us with formal written comments. The department did provide several technical clarifications that we incorporated as appropriate. We also sent drafts of the relevant sections of this report to cognizant officials from the nonprofits involved in the specific examples cited in this report and incorporated their comments as appropriate.

As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the date of this letter. We will then provide copies of this report to other interested congressional committees; the Secretary of Homeland Security; the Administrator of the Federal Emergency Management Agency; and federal, state, local, and nonprofit officials we contacted for this review. This report also is available at no charge on the GAO Web site at http://www.gao.gov.

If you have any questions about this report, please contact me at (202) 512-6806 or at czerwinskis@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix II.

To address our first objective, on how the federal government has worked with nonprofit organizations to facilitate Gulf Coast recovery following Hurricanes Katrina and Rita in 2005, we first conducted a systematic review and synthesis of GAO and other reports to identify (a) the range of federal programs used to support Gulf Coast recovery; (b) the types of nonprofit organizations that provide federally supported recovery assistance; and (c) the types of service delivery mechanisms federal agencies used when working with nonprofit organizations. For this objective we also interviewed officials involved in recovery efforts from federal, state, and local governments, as well as officials from nonprofit organizations, to help us refine our understanding of the range of federal government relationships with nonprofit organizations active in Gulf Coast recovery. To address our second objective, describing steps taken by the federal government to address challenges encountered when working with nonprofits to deliver recovery services, we conducted interviews with federal, state, local, and nonprofit officials and obtained supporting documentation of federal actions where appropriate.
We focused our review on Louisiana and Mississippi because these two states sustained the most damage from Hurricanes Katrina and Rita and thus accounted for a large portion of the federal funding made available to Gulf Coast states for recovery. In addition, given their roles in disaster recovery, we focused on the activities of two components of the Department of Homeland Security—the Federal Emergency Management Agency and the Office of the Federal Coordinator for Gulf Coast Rebuilding—in describing how the federal government has worked with nonprofits on Gulf Coast recovery. We selected a variety of individuals and organizations in order to capture a wide range of perspectives, including (a) the range of types of nonprofit organizations active in Gulf Coast recovery, (b) the broad range of federally supported recovery services delivered to residents of the affected areas, (c) the range of service delivery mechanisms used to deliver services, and (d) individuals and organizations identified through our literature review, informational interviews, and/or referrals received during the course of our work. In total, we interviewed federal, state, local, and nonprofit officials from the following 48 agencies and organizations. While findings from our interviews cannot be generalized, this approach allowed us to capture important variability within the various sectors.

Corporation for National and Community Service, Washington, D.C.
Federal Emergency Management Agency (FEMA) Headquarters, Washington, D.C.
FEMA Louisiana Transitional Recovery Office, New Orleans, La.
FEMA Mississippi Transitional Recovery Office, Biloxi, Miss.
FEMA Region IV, Atlanta, Ga.
FEMA Region VI, Denton, Tex.
Office of the Federal Coordinator for Gulf Coast Rebuilding, Washington, D.C.
Louisiana Department of Social Services, Baton Rouge, La.
Louisiana Recovery Authority, Baton Rouge, La.
Louisiana Serve Commission, Baton Rouge, La.
Mississippi Department of Human Services, Jackson, Miss.
Mississippi Office of the Governor, Office of Recovery and Renewal, Jackson, Miss.
Office of Emergency Preparedness, City of New Orleans, New Orleans, La.
Office of Intergovernmental Relations, City of New Orleans, New Orleans, La.
America Speaks, Washington, D.C.
Annunciation Mission, Free Church of the Annunciation, New Orleans, La.
Back Bay Mission, United Church of Christ, Biloxi, Miss.
Baptist Association of Greater New Orleans, New Orleans, La.
Baptist Community Ministries, New Orleans, La.
Broadmoor Development Corporation, New Orleans, La.
Catholic Charities Archdiocese of New Orleans, New Orleans, La.
Greater Light Ministries, New Orleans, La.
Greater New Orleans Disaster Recovery Partnership, New Orleans, La.
Hope Community Development Agency, Biloxi, Miss.
Israelite Baptist Church, New Orleans, La.
Katrina Relief, Poplarville, Miss.
Louisiana Association of Nonprofit Organizations, Baton Rouge and New Orleans, La.
Louisiana Family Recovery Corps, Baton Rouge, La.
Louisiana Voluntary Organizations Active in Disaster, Baton Rouge, La.
Lutheran Episcopal Services in Mississippi, Jackson, Miss.
Mississippi Center for Nonprofits, Jackson, Miss.
Mississippi Commission for Volunteer Service, Jackson, Miss.
Mississippi Gulf Coast Community Foundation, Gulfport, Miss.
Mississippi Interfaith Disaster Task Force, Biloxi, Miss.
Mississippi Voluntary Organizations Active in Disaster, Jackson, Miss.
Rand Gulf States Policy Institute, New Orleans, La.
Rebuilding Together New Orleans, New Orleans, La.
Recovery Assistance, Inc., Ministries, Biloxi, Miss.
Restore, Rebuild, Recover Southeast Mississippi (R3SM), Hattiesburg, Miss.
Salvation Army, Jackson, Miss.
St. Bernard Project, Chalmette, La.
The Advocacy Center, New Orleans, La.
Trinity Christian Community, New Orleans, La.
Tulane University Center for Public Service, New Orleans, La.
United Methodist Committee on Relief/Katrina Aid Today, New York, N.Y.
United Way for the Greater New Orleans Area, New Orleans, La.
Volunteers of America of Greater New Orleans, New Orleans, La.
Waveland Citizens Fund, Poplarville, Miss.

We conducted this performance audit from February 2008 through July 2010 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

We requested comments on a draft of this report from the Department of Homeland Security (DHS). DHS concurred with the report but did not provide formal written comments. However, the department had several technical clarifications that we incorporated as appropriate. We also provided drafts of relevant sections of this report to state, local, and nonprofit officials involved in the specific examples cited in this report, and incorporated their comments as appropriate.

In addition to the contact named above, Peter Del Toro (Assistant Director); Michelle Sager (Assistant Director); Jyoti Gupta; Kathleen Drennan; Anthony Patterson; and Jessica Thomsen made key contributions to this report.
Residents of the Gulf Coast continue to struggle to recover almost 5 years after Hurricanes Katrina and Rita devastated the area in August and September of 2005. In many cases the federal government coordinates with, and provides support to, nonprofit organizations in order to deliver recovery assistance to affected residents. A better understanding of how the federal government works with nonprofit organizations to provide such assistance may be helpful for recovery efforts on the Gulf Coast as well as for communities affected by major disasters in the future. GAO was asked to describe (1) how the federal government has worked with nonprofit organizations to facilitate Gulf Coast recovery following the 2005 hurricanes and (2) steps the federal government has taken to address challenges and strengthen relationships with nonprofits in the future. Toward this end, GAO reviewed the applicable disaster recovery literature and relevant supporting documents. GAO also interviewed officials from federal, state, and local governments as well as a wide range of nonprofit officials involved in Gulf Coast recovery.

The federal government used a variety of direct and indirect funding programs to support the delivery of human recovery services by nonprofit organizations following Hurricanes Katrina and Rita in areas such as housing, long-term case management, and health care. These programs included well-established grants such as the Department of Health and Human Services’ (HHS) Temporary Assistance for Needy Families and its Social Services Block Grant, as well as the Department of Housing and Urban Development’s (HUD) Community Development Block Grant. Programs established in the wake of the 2005 hurricanes also provided funding to nonprofits offering recovery services. These included HHS’s Primary Care Access and Stabilization Grant and HUD’s Disaster Housing Assistance Program.

The federal government also supported nonprofit organizations through coordination and capacity building. For example, the Federal Emergency Management Agency (FEMA) used Voluntary Agency Liaisons (VAL) to help establish and maintain working relationships between nonprofits and FEMA as well as other federal, state, and local agencies. The Office of the Federal Coordinator for Gulf Coast Rebuilding in the Department of Homeland Security provided a variety of assistance to nonprofits, including problem identification, information sharing, and networking. Other federal agencies also worked to bolster the capacity of nonprofits by providing temporary staff, training, and technical assistance to nonprofit organizations.

The federal government is taking steps to address several challenges and strengthen its relationship with nonprofit organizations providing recovery assistance. For example, nonprofit officials GAO spoke with cited challenges with the federal disaster grant process, including what they viewed to be complicated record keeping and documentation procedures as well as other requirements to obtain aid. A report issued earlier this year by the President’s Advisory Council for Faith-Based and Neighborhood Partnerships recognized the need to ease the administrative burden on nonprofits and contains specific recommendations for action. In an effort to make it easier for nonprofits with limited financial resources to obtain the services of AmeriCorps workers, the Corporation for National and Community Service waived the usual matching requirements in the wake of the 2005 hurricanes.
In addition, FEMA is taking steps to address challenges regarding the training of its VAL staff. Following an earlier GAO finding that VALs could benefit from additional training regarding federal recovery resources, FEMA issued a VAL handbook and is developing several VAL training courses that it expects to implement by the end of 2010. Finally, although there has been a lack of specific guidance regarding the role of nonprofits in disaster recovery, the federal government has taken steps to address this gap. FEMA and HUD have led a multi-agency effort that resulted in the development of a draft National Disaster Recovery Framework. Among other things, this framework contains specific information about the roles and responsibilities of nonprofits in disaster recovery. GAO is not making new recommendations in this report but discusses the implementation status of a relevant prior recommendation.
Federal policy calls for critical infrastructure protection activities that are intended to enhance the cyber and physical security of private infrastructures, such as telecommunications networks, that are essential to national and economic security. DHS, Commerce, and FCC have critical infrastructure protection responsibilities over issues related to the security of communications networks. Appendix IV provides additional information on these agencies’ legal authority related to supply chain security for commercial communications networks. In addition, some executive actions have focused on supply chain risk management issues related to cybersecurity, as described below.

The Homeland Security Act of 2002 established DHS and assigned it the following critical infrastructure protection responsibilities: developing a comprehensive national plan for securing the key resources and critical infrastructure of the United States and disseminating, as appropriate, information to assist in the deterrence, prevention, and preemption of, or response to, terrorist attacks.

Commerce is responsible under Presidential Policy Directive 21 (PPD-21), in coordination with other federal and nonfederal entities, for improving security for technology and tools related to cyber-based systems and promoting the development of other efforts related to critical infrastructure to enable the timely availability of industrial products, materials, and services to meet homeland security requirements. Within Commerce, the National Institute of Standards and Technology (NIST) has responsibility for, among other things, cooperating with other federal agencies, industry, and other private organizations in establishing standard practices, codes, specifications, and voluntary consensus standards.

Under PPD-21, FCC is responsible for exercising its authority and expertise to partner with other federal agencies on identifying and prioritizing communications infrastructure; identifying communications sector vulnerabilities and working with industry and other stakeholders to address those vulnerabilities; working with stakeholders, including industry, and engaging foreign governments and international organizations to increase the security and resilience of critical infrastructure within the communications sector; and facilitating the development and implementation of best practices promoting the security and resilience of the nation’s critical communications infrastructure.

Supply chain risk management has also been the focus of executive actions. For example, in January 2008, the President directed the development of a multi-pronged approach for addressing global supply chain risk management as part of the Comprehensive National Cybersecurity Initiative (CNCI), an ongoing effort. More recently, at the direction of the President, a report on the federal government’s cybersecurity-related activities was released, which discussed, among other things, the importance of prevention of and response to threats to the supply chains used to build and maintain the nation’s infrastructure. Additionally, in response to one of the report’s recommendations, the President appointed a national cybersecurity coordinator in December 2009.

The United States has several nationwide voice and data networks that, along with comparable communications networks in other countries, enable people around the world to connect to each other, access information instantly, and communicate from remote areas.
These networks consist of core networks, which transport a high volume of aggregated voice and data traffic over significant distances, and access networks, which are more localized and connect end users to the core network or directly to each other. Multiple network providers in the United States operate distinct core and access networks that interconnect to form a national communications infrastructure (see fig. 1). Routers and switches send traffic, in the form of data packets, through core and access networks. These pieces of equipment read the address information located in a data packet, determine its destination, and direct it through the network. Routers connect users between networks, while switches connect users within a network. The evolved packet core is the mobile core network used for long-term evolution (LTE) systems, a standard for commercial wireless technologies. LTE is widely accepted as the foundation for future mobile communications, and several major network equipment manufacturers are competing to provide equipment to wireless network providers that are upgrading their networks to deploy LTE.

Communications infrastructure is increasingly composed of components that are designed, developed, and manufactured by foreign companies or by U.S. companies that rely on suppliers that integrate foreign components into their products. Furthermore, we have previously reported that, according to NIST, today’s complex global economy and manufacturing practices make corporate ownership and control more ambiguous when assessing supply chain vulnerabilities, as companies may conduct business under different names in multiple countries. For example, foreign-based companies sometimes manufacture and assemble products and components in the United States, and U.S.-based companies sometimes manufacture products and components overseas or employ foreign workers domestically. Figure 2 depicts some of the locations that major network equipment manufacturers we spoke with use for different steps in the production process.

From 2007 through 2011, communications network equipment imported for the U.S. market came from over 100 foreign countries. While the import data do not distinguish whether the imports are from U.S.- or foreign-based companies, according to International Trade Commission staff, many of the imports are from U.S. companies manufacturing abroad. Imports of network equipment to the United States grew by about $10 billion (about 76 percent) over a 5-year period, from $13.5 billion in 2007 to $23.8 billion in 2011, as shown in figure 3. During this same period, imports from China, which was the leading source country, grew by $4.9 billion (112 percent). In 2011, the top five sources of U.S. imports of networking equipment were China ($9.3 billion), Mexico ($5.2 billion), Malaysia ($2.6 billion), Thailand ($1.9 billion), and Canada ($713 million).

While there is no comprehensive unclassified compilation of attacks on core networks that originated in the supply chain, reliance on a global supply chain introduces some degree of risk. Risks include threats posed by actors, such as foreign intelligence services or counterfeiters, that may exploit vulnerabilities in the supply chain, thus compromising the availability, security, and resilience of the networks. Multiple points in the supply chain may present vulnerabilities that threat actors could exploit.
For example, a lack of adequate testing for software patches and updates could leave a communications network vulnerable to the insertion of code intended to allow unauthorized access to information on the network. Routers and switches can present points of vulnerability because they connect to the core network and are used to aggregate data, according to an FCC official with whom we spoke. For example, if a threat actor gained control of a router, that actor could disrupt data traffic to and inside core networks. Supply chain threats and vulnerabilities are discussed in more depth in appendixes II and III, respectively.

The network providers and equipment manufacturers we met with told us they address the potential security risks of using foreign-manufactured equipment through voluntary risk management practices. Officials from the companies and industry groups that we spoke with said that they consider the level of risk to be affected not by where equipment and components are made, but by how they are made, particularly the security procedures implemented by manufacturers. Many of these officials also said they were not aware of any intentional attacks originating in the supply chain, and some said that they consider the risk of this type of attack to be low. Officials from four industry groups and one research institution we spoke with told us that supply chain attacks are harder to carry out and require more resources than other modes of attack, such as malicious software uploaded to equipment through the Internet, and are therefore a less likely vehicle for potential attackers. Three network providers told us the most common anomalies found in equipment are caused by erroneous coding in the software; such anomalies are unintentional. They could, however, lead to exploitable vulnerabilities, according to officials from a third-party testing firm. Nonetheless, the companies we spoke with told us that security is a high priority because their brand image and profitability depend, in part, on avoiding any breach of security or disruption of service.

Network providers and equipment manufacturers told us that their voluntary risk management practices fall in the areas of vendor selection, vendor security requirements, and equipment testing and monitoring, as described below and in figure 4. They said these practices are often part of their companies’ overall security plans and procurement processes and are applied throughout the entire life cycle of their equipment.

The network providers and equipment manufacturers we spoke with said that ensuring the security and reliability of their equipment requires them to carefully select their vendors. In addition to the typical considerations when selecting vendors—prices and product performance, the vendor’s financial stability, and maintenance and service options offered—the providers and manufacturers told us that they consider security-related factors, such as the vendor’s security practices, the security-related industry standards the vendor follows, and past security performance or reputation. Another consideration for some network providers when selecting vendors is how critical the equipment being procured is to network operations. Components that will be used in the core network, for example, are typically purchased from vendors that network providers consider most trustworthy.
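To make this kind of selection process concrete, the following is a minimal sketch, in Python, of how such factors might be combined into a weighted vendor score. The factor names, weights, vendor names, and scores are hypothetical illustrations, not the practice of any company we interviewed.

```python
# Illustrative only: a toy weighted-scoring model for vendor selection.
# All weights, factors, and vendor data below are hypothetical.

from dataclasses import dataclass

# Hypothetical weights; security-related factors weigh more heavily
# when the purchase is destined for the core network.
WEIGHTS_ACCESS = {"price": 0.30, "performance": 0.25,
                  "financial_stability": 0.15, "security_practices": 0.30}
WEIGHTS_CORE = {"price": 0.15, "performance": 0.20,
                "financial_stability": 0.15, "security_practices": 0.50}

@dataclass
class Vendor:
    name: str
    scores: dict  # each factor scored 0.0 (worst) to 1.0 (best)

def rank_vendors(vendors, core_network: bool):
    """Rank vendors by weighted score; use stricter weights for core gear."""
    weights = WEIGHTS_CORE if core_network else WEIGHTS_ACCESS
    scored = [(sum(weights[f] * v.scores[f] for f in weights), v.name)
              for v in vendors]
    return sorted(scored, reverse=True)

if __name__ == "__main__":
    candidates = [
        Vendor("VendorA", {"price": 0.9, "performance": 0.7,
                           "financial_stability": 0.8, "security_practices": 0.5}),
        Vendor("VendorB", {"price": 0.6, "performance": 0.8,
                           "financial_stability": 0.9, "security_practices": 0.9}),
    ]
    # For core-network equipment, the security-weighted ranking favors VendorB.
    for score, name in rank_vendors(candidates, core_network=True):
        print(f"{name}: {score:.2f}")
```

The only design point the sketch captures is the one described above: security-related factors can be weighted more heavily for core-network purchases, consistent with providers’ statements that core components come from their most trusted vendors.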
Some network providers told us they also value having long-term relationships with equipment manufacturers, because they are able to develop trust over time that the manufacturer will provide them with reliable and secure equipment and services. While network providers said that they are aware of security concerns about vendors from certain countries, they do not exclude from consideration vendors that have manufacturing locations in those countries, in part because the global nature of the supply chain would make excluding all vendors located in a particular country difficult. Some network providers told us they may exclude or avoid vendors based on factors such as the ownership of the company or concerns about the security of the vendor’s product, and two told us that federal government officials had advised against using specific vendors for national security reasons, as discussed in the following section of this testimony.

Network providers and equipment manufacturers told us that once vendor selections are made, they might require vendors to follow certain security practices, often as part of their contracts. Network providers told us that the security practices they require are typically based on the criticality or perceived risk of the project and the role of the vendor. For example, one network provider we spoke with generates a vendor risk profile for purchases that it considers critical or high risk or if it does not have an established relationship with the vendor. The company uses the profile to collect information on the product or service being provided; the vendor’s access to proprietary information, such as the company’s financial information or sensitive customer information; and available information on a vendor’s subcontractors. This information enables the network provider to identify areas of concern to investigate and to customize the security requirements placed on the vendor. The security practices that both network providers and equipment manufacturers may require of their vendors include the following:

physical security measures, such as procedures for securing manufacturing sites, transporting equipment and parts, and packaging equipment and parts;

access controls, such as limiting in-house and vendor employees’ access to equipment, maintaining records of who accesses equipment, and restricting who performs patches and updates; and

employee security measures, such as requiring employees to have background checks and to use passwords and user verification to access systems.

Additionally, network providers and equipment manufacturers told us they might require vendors to allow inspections of their manufacturing sites to check for compliance with the agreed-upon security practices. Representatives from the companies we met with told us that they conduct inspections at varying frequencies and for a number of reasons, including if the vendor is providing a critical piece of equipment or part, if the vendor is identified as high risk, or if the equipment is performing poorly.

Network providers and equipment manufacturers told us that equipment is tested to detect vulnerabilities. This is done throughout the life cycle of equipment, including during product development, before and after implementation, and when any patches or updates are applied. After equipment is installed in the network, network providers also monitor the equipment constantly to detect abnormal traffic or problems that might indicate a potential cyber attack and disrupt network service.
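As an illustration of the kind of constant monitoring described above, the following is a minimal sketch, in Python, of flagging abnormal traffic volumes on a piece of equipment. The traffic counts and threshold are hypothetical; production monitoring systems are far more sophisticated.

```python
# Illustrative only: flag hours whose traffic volume deviates sharply
# from the baseline, a simple stand-in for "abnormal traffic" detection.
# The counts and z-score threshold below are hypothetical.

import statistics

def flag_anomalies(hourly_packet_counts, z_threshold=2.0):
    """Return indices of hours whose traffic deviates sharply from the mean."""
    mean = statistics.mean(hourly_packet_counts)
    stdev = statistics.stdev(hourly_packet_counts)
    if stdev == 0:
        return []
    return [i for i, count in enumerate(hourly_packet_counts)
            if abs(count - mean) / stdev > z_threshold]

# A sudden spike like hour 5 below could indicate a problem worth
# investigating, or an attempted exploit of the equipment.
counts = [1010, 990, 1005, 1002, 998, 9800, 1001, 995]
print(flag_anomalies(counts))  # -> [5]
```

As the discussion later in this testimony notes, statistical monitoring of this kind mainly surfaces unusual activity; it does not by itself reveal flaws hidden in the equipment.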
According to officials from a third-party testing firm, there are several tools available to test the security of equipment, including: vulnerability scans—searching software and hardware for known vulnerabilities; penetration testing—executing deliberate attempts to attack a network through the equipment, sometimes targeting specific vulnerabilities of concern; and source code analysis—evaluating the underlying software code in depth, which can uncover unknown vulnerabilities that would not be detected during a vulnerability scan.

Testing can be performed by the network provider, the equipment manufacturer, or independent third-party testing firms. Most network providers and several equipment manufacturers told us they use third-party testing firms on an ad hoc basis, such as when requested by a customer or when they do not have the expertise or resources to conduct appropriate tests. Network providers and equipment manufacturers also use these firms when they want to analyze software or firmware source code because equipment manufacturers are reluctant to provide network providers with source code, which they consider intellectual property. Two network providers and one equipment manufacturer told us they use a trusted delivery model that employs a third-party testing firm to ensure that the equipment purchased and received is secure. Under this model, the third-party testing firm tests equipment over the full life cycle of the equipment, including when there are software patches or hardware updates, and uses a number of different techniques, such as source code analysis. Additionally, the testing firm verifies that the equipment delivered and implemented by the network provider matches the equipment tested and that the equipment manufacturer followed certain security procedures.

However, a recent congressional report identified the following potential limitations of third-party testing and available testing techniques. These firms typically test equipment that is configured in a specific and restrictive way that may differ from the configuration that is actually deployed in the network. The behavior of equipment can vary widely depending on how and where it is configured, installed, and maintained. The pace of technology is changing more rapidly than third-party evaluation processes. Vendors that finance their own security evaluations create a conflict of interest that can lead to skepticism about the independence and rigor of the results.

Officials from a third-party testing firm told us that there are evaluation processes, such as the trusted delivery model, that test the equipment delivered to network providers and deployed into the network against the equipment tested. Although they said it is impossible to test every piece of equipment, the firm tests a statistically significant random selection of equipment that represents all manufacturing lots and geographic locations. They also told us that independence is critical to their business. The officials said the vendor has no visibility into the evaluation process, and, typically, the vendor is obligated to report testing results. The congressional report further stated that, regardless of the testing technique employed, fully preventing a determined and clever insider from intentionally inserting flaws into equipment means finding and eliminating every significant vulnerability in a complex product—a monumental, or even, in the words of the report, "virtually impossible," task.
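Two elements of the trusted delivery model described above lend themselves to a short illustration: sampling delivered units so that every manufacturing lot is represented, and verifying that a delivered software image matches the image that was tested. The Python sketch below is a hypothetical rendering of those ideas, not the testing firm's actual process; the file paths, lot labels, and per-lot sample size are invented for the example.

```python
# Hypothetical sketch of two ideas from the trusted delivery model: sampling
# delivered units across all manufacturing lots, and verifying that a
# delivered software image matches the image that was tested. Paths, lot
# labels, and the per-lot sample size are invented for illustration.
import hashlib
import random

def image_digest(path: str) -> str:
    """Return the SHA-256 digest of a firmware or software image file."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def matches_tested_build(delivered_path: str, tested_sha256: str) -> bool:
    """A delivered unit passes only if its image hashes to the tested value."""
    return image_digest(delivered_path) == tested_sha256

def sample_units(units_by_lot: dict[str, list[str]], per_lot: int) -> list[str]:
    """Randomly sample units so every manufacturing lot is represented."""
    picks: list[str] = []
    for lot, units in units_by_lot.items():
        picks.extend(random.sample(units, min(per_lot, len(units))))
    return picks

# Example: pick 2 units from each (hypothetical) lot for verification.
lots = {"lot-A": ["sn-001", "sn-002", "sn-003"], "lot-B": ["sn-101", "sn-102"]}
print(sample_units(lots, per_lot=2))
```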
Similarly, officials from one third-party testing firm with whom we spoke told us that they have concerns about the effectiveness of network monitoring as a way of detecting vulnerabilities. They said that security monitoring, in most cases, can only detect attempts to exploit known vulnerabilities or, in more complex approaches, identify potentially dangerous anomalous network activity. Moreover, as systems evolve and are updated, new vulnerabilities that have long existed in the underlying equipment may be inadvertently exposed in a manner that makes exploitation possible.

There are currently no industry standards that address all aspects of supply chain risk management, including supply chain security, and few best practices that provide industry with guidance on determining what practices to use. However, according to officials from companies and industry groups and the experts we spoke with, there are several industry-led efforts to establish standards and best practices and share information related to supply chain security. Some network providers and equipment manufacturers told us that they developed their own practices based on national and international standards that address information systems' security, such as those described in the Common Criteria certification program and those developed by the International Organization for Standardization (ISO), the International Electrotechnical Commission (IEC), NIST, and the Internet Engineering Task Force. However, these standards are not specific to supply chain security. Additionally, federal agencies that we have identified as having jurisdiction over issues related to the security of communications networks have not established supply chain security requirements for the communications industry, as discussed further in the next section of this testimony. The companies we spoke with also told us they have been participating in information sharing about cybersecurity issues, including supply chain security, in venues including informal conversations, industry group meetings, and discussions with the federal government. Below are the two industry-led efforts most frequently discussed during our interviews.

The Open Group Trusted Technology Forum (OTTF)

The OTTF is a forum within The Open Group, which is a global consortium that represents all sectors of the IT community, including academics, equipment manufacturers, federal agencies, and software developers. The Open Group establishes certification programs and voluntary consensus standards, such as standards for security, enterprise architecture, interoperability, and systems management. The OTTF's objective is to create and adopt standards to improve the security and integrity of commercial off-the-shelf information and communication products, including hardware and software, as they are being developed and moved through the global supply chain. In April 2013, the OTTF published a voluntary standard that is intended to enhance the security of global supply chains by mitigating the risks of tainted and counterfeit products. The OTTF intends to provide an accreditation program that will allow information and communication providers, equipment manufacturers, and those vendors that supply software or hardware components to the providers and manufacturers to become accredited if they meet the standard's requirements and conformance criteria.
Officials from DOD said that although it is unknown whether industry will adopt this standard and what the associated costs will be to maintain and use it, developing such process-based certifications along with product certifications, such as the Common Criteria, may prove beneficial in covering more of the global IT supply chain.

Communications Sector Coordinating Council (CSCC)

In accordance with Homeland Security Presidential Directive 7, the CSCC is an industry-led group that represents the viewpoints of the U.S. communications sector and facilitates coordination between industry and the federal government on improving physical and cyber security of the communications critical infrastructure. Representatives from the CSCC told us that the CSCC began meeting with the federal government in March 2011 to discuss supply chain security, which led to the creation of a CSCC working group to facilitate dialogue, planning, and coordination among the government and industry on supply chain risk management. This group's objectives include enhancing the government's understanding of industry's current risk management practices, increasing the government's sharing of supply chain threat information, and identifying and sharing best practices for supply chain risk management. The working group is scheduled to conclude its work in December 2013.

The White House released an Executive Order in February 2013 that is likely to have an impact on communications supply chain security. We identified other federal efforts, such as the Interim Telecommunications Sector Risk Management Task Force, that could impact communications supply chain security, but the results of those efforts are considered sensitive, so we do not include them here. The Executive Order calls for NIST to develop a framework to reduce cyber risks to critical infrastructure and for DHS and others to spearhead increased information sharing between the federal government and owners and operators of critical infrastructure, including communications networks. As discussed below, federal officials told us that supply chain security may be included in these efforts, but the extent has yet to be determined.

The Executive Order instructs NIST to develop a cybersecurity framework (framework) to reduce cyber risks to critical infrastructure using an open public review and comment process. This framework would provide technology-neutral guidance to critical infrastructure's owners and operators. In February 2013, NIST published a request for information (RFI) in which it stated that it is conducting a comprehensive review to develop the framework and is seeking stakeholder input. According to NIST officials, the extent to which supply chain security of commercial communications networks will be incorporated into the framework is largely dependent on the input NIST receives from stakeholders. The officials added that while it is reasonable to assume that they may receive comments about supply chain security, which crosses critical infrastructure sectors, it is possible they may not receive comments specific to the use of foreign-manufactured equipment in commercial communication networks.
Once the preliminary framework is published, the Executive Order requires agencies with responsibility for regulating the security of critical infrastructure to provide a report—in consultation with the National Security Staff, DHS, and the Office of Management and Budget—that states whether the agencies have clear authority to establish requirements based on the framework and whether any additional authorities are necessary. DHS officials stated that without seeing the context of the report, they could not say whether it would identify authorities specifically related to the supply chain security of commercial communications networks and the conditions under which those authorities could be used.

The Executive Order also calls for the federal government to increase information sharing with owners and operators of critical infrastructure, including communications networks, sharing that could include information on supply chain-related threats. The order directs DHS to share unclassified cyber threat information and expand a voluntary information-sharing program that provides classified cyber threat information to critical infrastructure owners and operators with government security clearances. DHS officials told us that they foresee that this information sharing could encompass threats originating in the supply chain.

The Australian government is considering a reform proposal to establish a risk-based regulatory framework to better manage national security challenges to Australia's telecommunications infrastructure. The Attorney-General, in consultation with industry, has created a proposal that addresses supply chain risks by introducing a universal obligation on carriers and carriage service providers to protect their networks and facilities from unauthorized access or interference. Specifically, the proposal would require carriers and carriage service providers to be able to demonstrate competent supervision and effective controls over their networks. The government would also have the authority to use enforcement measures to address noncompliance, as described in table 1. Under this framework, the government would provide guidance informing carriers and carriage service providers how they can maintain competent supervision and effective control over their networks and would educate carriers and carriage service providers on national security risks. The approach would require amendments to telecommunications statutes, such as the Telecommunications Act, and to other relevant laws.

India enacted a new approach in 2011 through its operating licenses for telecommunications service providers. India's Department of Telecommunications (DoT) is responsible for granting operating licenses to India's telecommunications service providers. In May 2011, DoT issued amendments to its operating licenses that included new or revised requirements for providers and equipment vendors to improve the security of India's telecommunications network infrastructure. Under the amendments, telecommunications service providers are to be completely responsible for the security of their networks, including the supply chain of their hardware and software. Key security requirements are described in table 2.

The United Kingdom (UK) enacted new security and resilience requirements for network and service providers in 2011 through revisions to its Communications Act of 2003.
The UK's Office of Communications (Ofcom), the independent regulator and competition authority for the UK communications industries, is responsible for enforcing the requirements. According to Ofcom officials, these requirements address supply chain risks by focusing on the ability of the network and service providers to manage the overall security of their infrastructure and maintain network availability. Ofcom officials told us they are still developing their overall approach to enforcing the requirements, which are described in table 3.

A Chinese network equipment manufacturer voluntarily partnered with the UK government to establish a Cybersecurity Evaluation Centre to test its equipment for use in UK networks. According to officials from Ofcom and the Chinese manufacturer, the facility was created in part to address national security concerns related to using equipment from a vendor that did not have an established relationship with the UK government or UK network providers. The Chinese manufacturer provides the facility with the design and source code for all equipment, which is then tested for vulnerabilities by staff with UK security clearances. According to officials from Ofcom and representatives from the Chinese manufacturer, network providers cannot use the equipment until it has been approved through the testing process. In addition, the UK government requires that all software patches be tested using the same process before they are installed on the equipment by the network providers. According to officials from the Chinese manufacturer, this voluntary approach helped increase trust with its customers. However, in November 2012, the chairman of the UK Parliament's Intelligence and Security Committee confirmed to us that the committee is reviewing the commercial relationship between the Chinese manufacturer and a British telecommunications provider, as well as the Chinese manufacturer's overall presence in the UK's critical national infrastructure.

The U.S. government's Committee on Foreign Investment in the United States (CFIUS) conducts reviews to determine whether certain transactions that could result in foreign control of U.S. businesses pose risks to U.S. national security. Industry representatives from the U.S. Communications Sector Coordinating Council told us the council and participating federal entities are discussing whether a voluntary notification process similar to CFIUS should be used for network provider purchases of foreign-manufactured equipment. In addition, the House Permanent Select Committee on Intelligence report recommended that legislative proposals seeking to expand CFIUS to include purchasing agreements should receive thorough consideration by relevant congressional committees. CFIUS follows a process established by statutes and regulations for examining certain transactions that could result in foreign control of U.S. businesses. Parties generally submit voluntary notices of transactions to CFIUS, but CFIUS also has the authority to initiate reviews unilaterally. Pursuant to the Foreign Investment and National Security Act of 2007, CFIUS must complete a review of a covered transaction within 30 days.
In certain circumstances, following the review, CFIUS may initiate an investigation that may last up to 45 additional days. If CFIUS finds that the covered transaction presents national security risks and that other provisions of law do not provide adequate authority to address the risks, then CFIUS may enter into an agreement with, or impose conditions on, the parties to mitigate such risks. If the national security risks cannot be resolved and the parties do not choose to abandon the transaction, CFIUS may refer the case to the President, who can choose whether to suspend or prohibit the transaction. As shown in table 4, presidential decisions are rare. Table 4 also shows the number of CFIUS covered transactions, withdrawals, and other outcomes from calendar years 2009 to 2011.

Discussions between the Communications Sector Coordinating Council and participating federal entities on adapting a CFIUS-type voluntary notification process for use on equipment purchases are ongoing, and it is not clear how the proposal will develop, if at all. The council is trying to understand the threats the government is concerned about and whether these could be best addressed by a CFIUS-type process or some other means. According to some members of the council, options range from a simple notification process, wherein network providers notify the federal government of proposed equipment purchases, to a complete review and approval process for the proposed transactions, including the aforementioned 30-day review and 45-day investigation periods.

While these approaches are intended to improve the supply chain security of communications networks, they may also create the potential for trade barriers, additional costs, and constraints on competition. Additionally, there are other issues specific to the approach of expanding the CFIUS process to include foreign equipment purchases. We identified these issues based on interviews with foreign government officials and U.S. industry stakeholders and our review of foreign proposals and other documentation. While the issues we identified provide a range of considerations that U.S. federal agencies would need to take into account if they chose to implement these approaches, they do not represent an exhaustive list.

Some of the approaches may create a trade barrier or cause trade disputes. The Office of the United States Trade Representative (USTR) has reported that standards-related measures that are non-transparent, discriminatory, or otherwise unwarranted can act as significant barriers to U.S. trade. USTR has reported concerns regarding some of India's licensing requirements for telecommunications service providers, including the following: the requirement for telecommunications equipment vendors to test all equipment in labs in India; the requirement to allow the service provider and government agencies to inspect a vendor's manufacturing facilities and supply chain and perform security checks for the duration of the contract to supply the equipment; and the imposition of strict liability and possible blacklisting of a vendor for taking inadequate precautionary security measures, without the right to appeal and other due process guarantees. These requirements may result in trade-distorting conditions by making it more expensive and burdensome for foreign equipment manufacturers to do business in India.
According to USTR, it is too early to evaluate whether the proposed reforms in Australia, the new requirements and voluntary Cybersecurity Evaluation Centre in the UK, or an extension of CFIUS to equipment purchases would create trade barriers or cause trade disputes. Three U.S.-based equipment manufacturers told us that extending CFIUS to equipment purchases could cause other countries to implement similar policies, which may result in barriers to entry in other countries and trade disputes.

All of the approaches may increase costs to industry and the federal government. The Australian and UK governments recognize that changes to the regulatory framework would impose costs on industry, which may increase prices for consumers. Representatives from the Chinese equipment manufacturer stated that although voluntarily setting up the Cybersecurity Evaluation Centre was expensive, it was the cost of doing business in the UK. Similarly, one telecommunications industry group reported that India's 2011 License Amendments would increase compliance costs for Indian telecommunications service providers. The majority (6 of 8) of equipment manufacturers we spoke with told us that any proposal to extend CFIUS to equipment purchases would increase costs for network providers, equipment manufacturers, and ultimately consumers. In addition, it is likely that the responsible federal agencies would also incur additional administrative costs in implementing any supply chain risk management requirements.

All of the approaches may have an impact on the business decisions of network providers and equipment manufacturers and on competition within the industry. The Australian government is aware that its proposed framework could have effects on the industry, and it is trying to anticipate these effects and explore how they might be mitigated. It is also seeking input from industry and government stakeholders on any potentially broader effects on competition in the telecommunications market and on consumers. Similarly, a telecommunications industry group reported that the Indian requirements complicate the relationship between telecommunications service providers and their equipment vendors, creating concerns about access to intellectual property and giving each an incentive to shift the risk of enforcement onto the other (though the current requirements still place the principal obligations on the licensees). Representatives from a U.S.-based equipment manufacturer told us that extending the CFIUS process to equipment purchases could potentially lead to vendors being excluded from the U.S. market without appeal rights; this would result in limited competition and therefore potentially higher prices for consumers. Similarly, four network providers and one think tank told us that extending CFIUS to equipment purchases would limit competition and raise costs.

The appropriate universe of equipment supply contracts that would be subject to review would need to be defined if the CFIUS process were extended to cover these transactions. There were 269 notices of transactions covered by the CFIUS process from 2009 through 2011. By comparison, four network providers and two equipment manufacturers we spoke with noted that network providers conduct thousands of transactions a year and expressed concerns about being subject to a CFIUS-type process.
Specifically, the two manufacturers said it would be difficult for CFIUS members to oversee all of these transactions in a timely fashion, adding expense to the procurement process for network providers and equipment manufacturers that could be passed on to consumers. In addition, CFIUS member agencies may incur significant administrative costs if asked to review thousands of procurement transactions per year.

Chairman Walden, Ranking Member Eshoo, and Members of the Subcommittee, this completes my prepared statement. I would be pleased to respond to any questions that you may have at this time. If you or your staff members have any questions about this testimony, please contact me at (202) 512-2834 or goldsteinm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. GAO staff who made key contributions to this testimony are listed in appendix V.

We focused our review on the core networks that constitute the backbone of the nation's communications system and the equipment—such as routers, switches, and evolved packet cores—that transports traffic over these networks. We also focused on the wireline, wireless, and cable access networks used to connect end users to the core wireline networks. We did not address broadcast or satellite networks because they are responsible for a smaller volume of traffic than other networks. To obtain information on all of our objectives, we conducted a literature review and semi-structured interviews with, or obtained written comments from, academics, industry analysts, and research institutions; federal entities; domestic and foreign equipment manufacturers; industry and trade groups; network providers; and security and software audit firms, as shown in table 5. We selected the stakeholders based on relevant published literature, our previous work, stakeholders' recognition and affiliation with a segment of the communications industry (domestic and foreign equipment manufacturers, industry and trade groups, network providers, and so forth), and recommendations from the stakeholders interviewed. We also spoke with federal entities that have a role in addressing cybersecurity or international trade or that participate in the Committee on Foreign Investment in the United States (CFIUS).

To describe how communications network providers and equipment manufacturers help ensure the security of foreign-manufactured equipment that is used in commercial communications networks, we interviewed network providers; domestic and foreign manufacturers of equipment (routers, switches, and evolved packet cores); and industry and trade groups. Information we collected included specific industry practices, such as the use of security provisions in contracts; the extent to which domestic and international standards help guide their practices; and any industry-wide efforts addressing supply chain security. We focused this work on the five wireless and five wireline network providers with the highest revenue, the eight manufacturers of routers and switches with the highest market shares in the U.S. market, and two cable network providers. To identify the top five U.S. wireless providers by subscribers, we used data from the Department of Defense and verified the subscribership data against investor relations reports from the providers. To identify the top five U.S. wireline providers by subscribers, we used publicly available rankings and verified the subscriber data against investor relations reports from the providers.
We selected the top eight manufacturers of routers and switches based on 2010 U.S. market share. We selected two of the top three U.S. cable companies based on 2011 subscriber data.

To identify how the federal government is addressing the potential risks of foreign-manufactured equipment that is used in commercial communications networks, we asked federal agencies to identify statutes and regulations that are potential sources of the federal government's legal and regulatory authority over how communications network providers ensure the security of their U.S. commercial networks. Additionally, we interviewed and collected documentation from 13 federal entities to identify related federal efforts, such as interagency information-sharing initiatives and those with the private sector.

To determine other approaches, including those of foreign countries, for addressing the potential risks of using foreign-manufactured equipment in commercial communications networks, we conducted a literature review and interviewed stakeholders. We identified other approaches from governmental entities in Australia, India, and the United Kingdom (UK) that address supply chain risks for commercial communications networks. These countries were chosen to show the variation in how foreign governments are approaching supply chain risk management. We also considered criteria such as the availability of public information on the approach to allow for a detailed review and language considerations. While the data collected from these three countries may not encompass all possible approaches, they provided important insights into the approaches that some countries are using to address supply chain risks for commercial communications networks. We reviewed documents and interviewed officials from governmental entities in Australia, India, and the UK to describe the approaches and the issues that could arise from using them. We identified these issues based on interviews with foreign government officials and U.S. industry stakeholders and our review of foreign proposals and other documentation. The issues identified provide a range of considerations but are not an exhaustive list of all issues that could be considered.

We also assessed the potential for using the CFIUS review process for purchases of foreign-manufactured equipment because a voluntary notification process similar to CFIUS is being discussed by government and industry stakeholders. We reviewed the Foreign Investment and National Security Act of 2007, related regulations, and CFIUS's annual reports to Congress to describe the CFIUS process. We reviewed CFIUS's transaction data on the number of covered transactions, investigations, and presidential decisions made from calendar years 2009 to 2011 to provide context. Additionally, we interviewed officials from federal agencies and industry stakeholders on how the commercial communications market in the United States may be affected if any of the identified approaches are used when U.S. communications companies purchase equipment manufactured in foreign countries. We conducted data reliability testing to determine that the data used were suitable for our purposes.

We conducted this performance audit from December 2011 to May 2013 in accordance with generally accepted government auditing standards.
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Supply chain threats are present at various phases of the life cycle of communications network equipment. Each of the key threats presented in table 6 could create an unacceptable risk to a communications network. Threat actors can introduce the threats described in appendix II by exploiting vulnerabilities at multiple points in the global supply chain. Table 7 describes examples of the types of vulnerabilities that could be exploited.

In examining potential sources of authority related to supply chain security, we focused on DHS, FCC, and Commerce because of their roles in critical infrastructure protection. Homeland Security Presidential Directive 7 (2003) designated DHS as the sector-specific federal agency for the telecommunications critical infrastructure sector. It required DHS to set up appropriate systems, mechanisms, and procedures to share cyber information with other federal agencies and the private sector, among others. The Communications Sector-Specific Plan of the National Infrastructure Protection Plan characterizes FCC and Commerce as partners that have relevant authority and support DHS's communications critical-infrastructure protection efforts.

DHS has not identified specific authorities that would permit it to take action to ensure the security of the supply chain of commercial networks. Officials from DHS's Office of General Counsel stated that the Homeland Security Act might provide applicable authority, although this authority is not specific to the security of the supply chain of commercial networks. DHS further stated that it cannot say what specific authority it might use if it needed to take action because it has not faced a set of circumstances related to a commercial network's supply chain security that required action.

Officials from FCC's Office of General Counsel stated that FCC could regulate network providers' supply chain practices to assure that the public interest, convenience, or necessity is served if circumstances warranted. Specifically, FCC could impose supply chain requirements on providers of common carrier wireline and wireless voice services and, in specific circumstances, information services providers, using FCC's authority under the Communications Act. Officials stated that FCC has not yet attempted to use these sources of authority to impose regulations specifically designed to address cybersecurity threats. FCC officials stated that because the agency has not adopted regulations or policies related to supply chain security in commercial communications networks, reliance on these sources of authority has not been tested by legal challenges in court. According to FCC officials, legislative changes to the Communications Act to provide express recognition of the agency's authority to address such threats would reduce the risk of such challenges and might facilitate adoption of supply chain security regulation. FCC officials added that although its current legal authority could allow FCC to act to impose supply chain requirements on network providers, the agency has not determined the extent to which it has authority to regulate companies that manufacture network equipment.
In the past, the agency's regulation of equipment manufacturers has focused on interference management. FCC officials told us that they are actively participating in discussions within the executive branch regarding supply side issues, though which agencies should take the lead on this issue has not been determined.

Commerce officials stated that Section 232 of the Trade Expansion Act of 1962, as amended, could potentially provide authority for Commerce to use when communications equipment purchases pose a potential risk to national security. According to Commerce documents, Section 232 gives Commerce statutory authority to conduct investigations to determine the effect of imports on national security. If an investigation finds that an import may threaten to impair national security, then the President may use his statutory authority to "adjust imports" by taking measures recommended by the Secretary of Commerce, including barring imports of a product. Commerce has not used, or attempted to use, this authority for any cases involving the communications sector. Commerce officials stated that they reviewed this authority in 2010 in part because a major network provider was considering purchasing foreign-manufactured communications equipment from a company that the federal government believed might pose a national security threat. Because the network provider decided not to purchase equipment from that company, Commerce did not review the potential applicability of Section 232 to that transaction.

In addition to the contact named above, Heather Halliwell, Assistant Director; Derrick Collins; Swati Deo; Anne Doré; Bert Japikse; Sara Ann Moessbauer; Josh Ormond; Amy Rosewarne; and Hai Tran made key contributions to this testimony.
The United States is increasingly reliant on commercial communications networks for matters of national and economic security. These networks, which are primarily owned by the private sector, are highly dependent on equipment manufactured in foreign countries. Certain entities in the federal government view this dependence as an emerging threat that introduces risks to the networks. GAO was requested to review actions taken to respond to security risks from foreign-manufactured equipment. This testimony addresses (1) how network providers and equipment manufacturers help ensure the security of foreign-manufactured equipment used in commercial communications networks, (2) how the federal government is addressing the risks of such equipment, and (3) other approaches for addressing these risks and issues related to these approaches. This is a public version of a sensitive report that GAO issued in May 2013. Information deemed sensitive has been omitted.

For the May 2013 report, GAO reviewed laws and regulations and interviewed officials from federal entities with a role in addressing cybersecurity or international trade, the five wireless and five wireline network providers with the highest revenue, and the eight manufacturers of routers and switches with the highest U.S. market shares. GAO obtained documentary and testimonial evidence from governmental entities in Australia, India, and the United Kingdom because of their actions to protect their networks from supply chain attacks.

The network providers and equipment manufacturers GAO spoke with reported taking steps in their security plans and procurement processes to ensure the integrity of parts and equipment obtained from foreign sources. Although these companies do not consider foreign-manufactured equipment to be their most pressing security threat, their brand image and profitability depend on providing secure, reliable service. In the absence of industry or government standards on the use of this equipment, companies have adopted a range of voluntary risk-management practices. These practices span the life cycle of equipment and cover areas such as selecting vendors, establishing vendor security requirements, and testing and monitoring equipment. Equipment that is considered critical to the functioning of the network is likely to be subject to more stringent security requirements, according to these companies. In addition to these efforts, companies are collaborating on the development of industry security standards and best practices and participating in information-sharing efforts within industry and with the federal government.

The federal government has begun efforts to address the security of the supply chain for commercial networks. In 2013, the President issued an Executive Order to create a framework to reduce cyber risks to critical infrastructure. The National Institute of Standards and Technology (NIST)—a component within the Department of Commerce—is responsible for leading the development of the cybersecurity framework, which is to provide technology-neutral guidance to critical infrastructure owners and operators. NIST published a request for information in which it stated that it is conducting a comprehensive review to obtain stakeholder input and develop the framework. NIST officials said the extent to which supply chain security of commercial communications networks will be incorporated into the framework depends in part on the input NIST receives from stakeholders.
GAO identified other federal efforts that could impact communications supply chain security, but the results of those efforts were considered sensitive. There are a variety of other approaches for addressing the potential risks posed by foreign-manufactured equipment in commercial communications networks, including those approaches taken by foreign governments. For example, the Australian government is considering a proposal to establish a risk-based regulatory framework that requires network providers to be able to demonstrate competent supervision and effective controls over their networks. The government would also have the authority to use enforcement measures to address noncompliance. In the United Kingdom, the government requires network and service providers to manage risks to network security and can impose financial penalties for serious security breaches. While these approaches are intended to improve supply chain security of communications networks, they may also create the potential for trade barriers, additional costs, and constraints on competition, which the federal government would have to take into account if it chose to pursue such approaches.
Farmers are exposed to financial losses because of production risks—droughts, floods, and other natural disasters—as well as variations in the market prices of their crops. Through the federal crop insurance program, participants can insure against losses on more than 100 crops. These crops include five major crops (corn, cotton, grain sorghum, soybeans, and wheat), which accounted for 86 percent of the program premiums in 2013; minor crops (field crops other than major crops and livestock); and specialty crops (fruits, vegetables, nursery crops, and tree nuts). Crop insurance participants may be individuals or legal entities—such as trusts, partnerships, and corporations—and members of an entity may share ownership of an insurance policy.

Participants can generally select various types of crop insurance policies, including yield-based plans, which protect against declines in production, and revenue-based plans, which protect against declines in production, price, or both. Some plans, however, are not available for all crops or in all locations. Participants may also choose between two types of coverage: (1) catastrophic coverage, which insures 50 percent of normal yield and 55 percent of the estimated market price of the crop, and (2) additional or "buy-up" coverage, which insures 50 percent to 85 percent of normal yield and up to 100 percent of the estimated market price of the crop. Beginning in 2015, participants have the option of buying insurance policies designed to reimburse "shallow losses"—that is, the portion of losses that is applied toward meeting a plan's deductible. In addition, participants may choose what type of units (a certain number of acres for a specific crop) to insure. Basic units cover all plantings of a crop in a single county with the same tenant and landlord; optional units are basic units divided into smaller units by township section; and enterprise units cover all plantings of a single crop in a county, regardless of the tenant and landlord structure. Enterprise units are generally more geographically diverse, so this type of unit is less risky and is charged a lower premium.

The federal government has played an active role in helping to mitigate the effects of production risks on farm income by promoting the use of crop insurance through subsidies of premiums. The federal government's premium and administrative expense subsidies for crop insurance policies are not payments to participants, but they can be considered a financial benefit to participants. Without a premium subsidy, crop insurance participants would have to pay the full amount of the policy premium. Similarly, without an administrative expense subsidy, premiums would likely be higher because insurance companies would have to reflect the full cost of administering the policies in those premiums. The federal government provides crop insurance premium subsidies in part to achieve high participation and coverage levels. High participation and coverage levels may reduce or eliminate the need for congressionally authorized ad hoc disaster programs to help farmers recover from natural disasters, which can be costly. For example, under three separate ad hoc disaster programs, USDA provided $7 billion in payments to farmers whose crops were damaged or destroyed by natural disasters from 2001 to 2007. In 2012, Congress did not enact ad hoc disaster assistance legislation despite a major drought affecting a large portion of the United States.
Congress sets premium subsidy rates, meaning the percentage of the premium paid by the government. Premium subsidy rates vary by the level of insurance coverage, the type of units covered by the policy, and the geographic diversity of crops insured. For most policies, the statutory subsidy rates range from 38 percent to 80 percent of the premiums. On average, premium subsidy rates were 62 percent in 2014 for these policies. The two new shallow loss insurance plans have premium subsidy rates of 80 percent and 65 percent. For catastrophic coverage, the federal government pays 100 percent of premiums, and participants pay a $300 administrative fee for each crop that they insure in each county. Administrative expense subsidies, which are paid to insurance companies, are determined as a percentage of total premiums and vary by policy type.

Unlike the crop insurance program, for more than a decade, USDA's farm and conservation programs have had statutory income limits setting the maximum amount of income that participants can earn and still remain eligible for program payments. Participants subject to the income limits are individuals, entities, and members of entities. The 2008 farm bill set separate limits for an individual's or a legal entity's farm income and nonfarm income; those limits were in effect from 2009 through 2013, but the limits changed in the 2014 farm bill. The income subject to both limits was based on AGI, as defined by the Internal Revenue Service (IRS), or a comparable measure, and averaged over the 3 most recent tax years. These limits varied by program and changed over time but, in general, they specified that participants in farm programs could not receive payments if their nonfarm income exceeded $500,000 or if their farm income exceeded $750,000. Participants in conservation programs generally could not receive benefits if their nonfarm income exceeded $1 million, unless at least two-thirds of their total AGI was farm income. The 2014 farm bill established a single income limit of $900,000 for farm and conservation programs. Appendix II provides additional information about the income limits established under the 2008 farm bill and FSA's enforcement of these limits.

Although the crop insurance program has no income limits for its participants, Congress has considered establishing an income threshold above which participants would receive reduced subsidies. In the Senate-passed version of the 2014 farm bill, crop insurance participants with AGI in excess of $750,000, averaged over 3 years, would have had their premium subsidies reduced by 15 percentage points (S. 954, 113th Cong. § 11033, as passed by the Senate, June 10, 2013). Implementation of this provision would have been contingent on the results of a study on the limitation's effects, but the provision was not included in the final version of the farm bill. The provision would not have reduced subsidies for catastrophic coverage. Also, in the House of Representatives, an amendment to its version of the farm bill was proposed that would have eliminated premium subsidies for participants with average AGI exceeding $250,000, but the amendment was defeated. In addition, to be eligible for premium subsidies, crop insurance participants that plant certain crops on land that is prone to erosion must have a conservation plan, and participants must not convert wetlands for crop production.
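Because the subsidy is a set percentage of the premium, the effect of the subsidy reductions discussed above is simple arithmetic. The short Python sketch below works through one hypothetical example: the $10,000 premium is invented, while the 62 percent average subsidy rate and the 15-percentage-point reduction come from the figures cited above.

```python
# Back-of-the-envelope arithmetic for a premium subsidy reduction. The
# $10,000 premium is hypothetical; the 62 percent average subsidy rate and
# the 15-percentage-point reduction come from the figures cited above.
def participant_share(premium: float, subsidy_rate: float) -> float:
    """Amount the participant pays after the government's premium subsidy."""
    return premium * (1 - subsidy_rate)

premium = 10_000.00
print(participant_share(premium, 0.62))         # 3800.0 under the average rate
print(participant_share(premium, 0.62 - 0.15))  # 5300.0 after a 15-point cut
```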
About 1 percent of crop insurance participants would have been affected if subsidies were reduced for the highest income participants. These participants differed from other crop insurance participants in some characteristics but overall were similar to them. Specifically, the highest income participants insured more farmland and were provided more in premium subsidies than other participants, on average. In general, however, the highest income crop insurance participants and other participants insured farmland in the same states, insured major crops most frequently, and made similar choices about insurance protection.

About 1 percent of crop insurance participants that also applied for farm and conservation programs with income limits would have been affected if subsidies had been reduced for the highest income participants from 2009 through 2013, based on our analysis of RMA and FSA data. The number of highest income crop insurance participants was about 7,500 annually on average but, as shown in table 1, the annual number decreased from 2009 through 2013. An FSA official told us that this decrease in recent years may be the result of fewer crop insurance participants applying for farm and conservation programs after they had been determined ineligible for these programs' payments because of their income. As a result, this analysis may understate the annual number of highest income crop insurance participants. In terms of premiums, the highest income participants accounted for about 1 percent of the premiums annually, on average, from 2009 through 2013.

Our analysis does not include all crop insurance participants because we relied on FSA data to determine whether they exceeded income limits, and FSA only had data on those that also participated in farm and conservation programs. Our analysis included about 66 percent of crop insurance participants, which accounted for about 73 percent of premiums. Nevertheless, results from USDA's annual survey of a sample of all U.S. farms confirm that less than 1 percent of crop insurance participants would have been affected from 2009 through 2012, the most recent year for which survey data were available. Our analysis also does not include data from 2014, which were not available when we conducted our review. The number of participants affected would have been smaller if the $900,000 income limit that went into effect for farm programs in 2014 had applied to crop insurance participants. According to preliminary FSA data, fewer than one-half of 1 percent of farm program participants were found to exceed this limit in 2014.

The highest income participants insured more farmland and had more premium subsidies provided on their behalf than other participants from 2009 through 2013. The highest income participants each insured about 490 acres of farmland on average, compared with about 310 acres insured by the other participants. The highest income participants were also associated with larger farms compared with other participants. On average, the highest income participants were associated with policies insuring about 2,920 acres, while other participants were associated with policies insuring about 1,330 acres. The highest income participants also had more premium subsidies provided on their behalf than other participants. Specifically, each of the highest income participants had an average of about $8,500 in premium subsidies provided on their behalf each year, while other participants had an average of about $7,480 each year.
Premiums, and hence premium subsidies, are based on the value of the insured crops and would be greater if more acres were insured and the crop values were higher. In some cases, the highest income participants insured considerably more acres and had considerably more than the average amount of premium subsidies provided on their behalf. Some examples we identified from USDA data and our analysis of the highest income crop insurance participants from 2009 through 2013 included the following:

One of the participants insured an average of more than 150,000 acres annually in multiple states. This participant grew major, minor, and specialty crops, and operated livestock farms and other business enterprises. About $6.1 million in premium subsidies were provided on behalf of this participant, and the participant also collected about $4.0 million in claims payments during the 5-year period.

The participants with the 10 highest dollar amounts in premium subsidies each insured an average of about 39,000 acres, had an average of about $2.6 million in premium subsidies provided on their behalf, and collected about $2.5 million in claims payments during the 5-year period.

Some of the highest income participants received income from operating large farms, but others received some of their income from nonfarming sources, according to our analysis. For example, more than 70 of the crop insurance participants we identified as among the highest income during 1 or more years from 2009 through 2013 were managers or professionals, including attorneys, executives, or physicians. Four others, who had net worth over $1.5 billion each in 2013, earned their wealth from a variety of sources in addition to farming, such as mining, real estate, sports, and information technology, according to publicly available information. Those participants each insured an average of about 18,200 acres, had approximately $118,400 in premium subsidies provided on their behalf, and collected about $38,300 in claims payments during the 5-year period. Further, participants that operated farms with higher annual gross sales ($250,000 or more) were more likely to have employment in nonfarm professions with higher wages, according to a USDA study.

About half of the highest income participants and 38 percent, on average, of the other participants in the crop insurance program reported an address in five states (Texas, Kansas, Illinois, Iowa, and California), according to our analysis of USDA data from 2009 through 2013. The highest income participants made up an average of about 1 percent of crop insurance participants in three of these five states, as shown in table 2, similar to the share of highest income crop insurance participants nationwide. Of these five states, California had the largest percentage of highest income participants in the state. In terms of premiums, the highest income participants accounted for 1 percent of the premiums in three of these five states, similar to the highest income crop insurance participants' share of premiums nationwide. They accounted for about 11 percent of the premiums in California and 2 percent of the premiums in Texas. The higher share of premiums in California may be partially the result of the type of crops grown there. Specifically, specialty crops are commonly grown in California, and such crops are often higher value and associated with higher premiums.
In Texas, FSA officials said there may be additional sources of revenue for landowners who farm, such as revenue from oil and gas development on their land. Appendix III contains a complete list of the numbers and percentages of the highest income participants in each state. We also identified more than 20 crop insurance participants among the highest income in 1 or more years from 2009 through 2013 that had foreign residences, such as in Canada and France.

The highest income participants insured major crops most frequently but were more likely than other participants to insure minor and specialty crops and receive some income from livestock. The highest income and other participants in the crop insurance program both insured major crops most frequently, but fewer of the highest income participants did so than other participants. As shown in figure 1, major crops accounted for about 64 percent of the premiums of the highest income participants but 90 percent, on average, of the other participants' premiums. The highest income participants insured minor and specialty crops more frequently and, among those crops, potatoes had the largest share of premiums. Potatoes made up about 8 percent of the highest income participants' premiums and about 1 percent of the other participants' premiums. According to USDA's analysis of an annual survey of U.S. farms from 2009 through 2012, the highest income participants were more likely than other participants to receive income from livestock. Specifically, an average of 65 percent of the highest income participants received some income from livestock, compared with 57 percent of other participants.

In selecting insurance plans, a majority of the highest income and other participants both chose revenue plans, rather than yield plans, from 2009 through 2013, but a smaller percentage of highest income participants picked revenue plans. Revenue plans, which protect farmers against crop revenue loss from declines in production or price, are the most popular plan type. Revenue plans accounted for an average of about 58 percent of the highest income participants' premiums and 82 percent of the premiums of other participants. One reason the highest income participants may have chosen revenue plans less often than other participants was that they insured minor crops and specialty crops more frequently, based on our analysis of USDA data, and not all those crops are eligible for revenue plans, according to RMA documents. For major crops only, revenue plans accounted for nearly the same percentage of the highest income participants' and other participants' premiums (about 88 and 90 percent, respectively), according to our analysis.

In selecting coverage levels, a majority of the highest income and other participants chose to insure 65 to 75 percent of the expected value of their crops from 2009 through 2013. The highest income participants, however, chose catastrophic coverage and coverage levels lower than 65 percent more often than other participants. They were also less likely than other participants to choose coverage levels higher than 75 percent. This may be because the highest income participants insured specialty crops more frequently, and these crops are more likely to be irrigated, which reduces the likelihood of losses due to drought, according to academic and industry publications.

In selecting crop insurance units, both the highest income and other participants chose optional units more often than basic or enterprise units from 2009 through 2013.
Specifically, optional units accounted for 45 percent of the highest income participants’ premiums and 43 percent of the other participants’ premiums. Crop insurance participants using optional units have a higher probability of claiming losses because these units are associated with less geographic diversity than basic units. Enterprise units accounted for 30 percent of the premiums of the highest income participants and 39 percent of the premiums of the other participants. In general, enterprise units are regarded as less risky because, compared with basic or optional units, they include more land and so reflect more geographic diversity. Appendix IV contains additional information on the characteristics of crop insurance participants.

If crop insurance subsidies had been reduced for participants with the highest incomes from 2009 through 2013, the crop insurance program, including its actuarial soundness, would likely not have been affected, according to our analysis of FSA and RMA data. In addition, the government would have saved tens of millions of dollars over the 5-year period. The savings would have been greater or smaller if other factors had changed, such as participants’ choices about insurance protection, crop prices, participants’ income, or policy provisions.

RMA is directed by law to adopt rates and coverages that will improve the actuarial soundness of the crop insurance program. For the federal crop insurance program, actuarial soundness means that the amount expected to be paid for claims is not greater than the portion of premiums collected that is designated to cover anticipated losses and a reasonable reserve. In addition, one of RMA’s goals is to continue to expand participation, according to its fiscal years 2011 to 2015 strategic plan. We determined that if Congress enacted statutory provisions to reduce premium subsidies for the highest income participants, it would most likely not affect the actuarial soundness or viability of the program because, based on our analysis of FSA and RMA data, the highest income participants (1) do not represent a lower risk to the program than participants in the remaining pool, (2) would be unlikely to leave the program, and (3) represent only about 1 percent of all participants and premiums in the program.

First, our analysis of several measures that reflect risk indicates that the highest income participants do not represent a lower risk to the program at the national level than do other crop insurance participants. One measure that reflects risk, the average ratio of claims payments to total premiums, known as the loss ratio, was 0.84 for the highest income participants and 0.82 for other participants from 2004 through 2013, suggesting that premiums were commensurate with claims payments, regardless of the income level of the participants. Another measure that reflects risk, the loss cost ratio, which is a measure of claims payments per unit of liability, was lower for the highest income participants than for other participants. However, according to our analysis, the difference could be explained by the participants’ choices in insurance plans, suggesting that the highest income participants do not represent a lower risk to the program. Specifically, from 2004 through 2013, the average loss cost ratio was about 6.3 percent for the highest income participants and 8.5 percent for other participants.
The lower loss cost ratio for the highest income participants reflects, in part, that they chose yield, rather than revenue, insurance more often than did other participants. With yield insurance, which covers losses resulting from declines in production, participants have a lower likelihood of making a claim than with revenue insurance. Revenue insurance, which covers losses resulting from declines in production, price, or both, was picked more frequently by other participants. The highest income participants also chose lower coverage levels, including catastrophic coverage, more often than did other participants, and with lower coverage levels participants are less likely to make claims under crop insurance policies. One other measure that reflects risk, the premium rate, was about 7.5 percent for the highest income participants, compared with 10.5 percent for other participants. As with the loss cost ratio, this difference is in part a reflection of participants’ choices in insurance plans. Also, the lower premium rate for the highest income participants corresponds to their lower likelihood of filing claims (which results in part from their choices in insurance plans), so the portion of premiums designated for losses for the highest income participants nationwide would not be likely to surpass the amount of money needed to cover their claims. Table 3 summarizes data on loss ratio, loss cost ratio, and premium rates for the highest income and other participants from 2004 through 2013.

Second, we determined that the highest income participants would be unlikely to leave the program in response to a reduction in subsidies. A reduction in subsidies would require participants to pay more of their premiums, but the effect on their overall costs would be limited because, as we found in August 2014, premium subsidies generally represent a small fraction of average production costs per acre. Given their income levels, participants in the highest income category would likely be able to afford this small increase in costs. Also, academic literature and government information suggest that participants would not likely leave the program because of their heavy reliance on crop insurance and the increasing importance of crop insurance. Further, several incentives encourage participants to retain crop insurance, such as some lenders’ requirement that farmers have crop insurance in order to obtain loans. Rather than leaving the program in response to a reduction in subsidies, it is more likely that participants would select lower levels of policy coverage than they currently have, according to an RMA analysis.

Third, if all of the highest income participants left the crop insurance program, the actuarial soundness of the program would not likely be affected because the highest income participants represent only about 1 percent of all participants and about 1 percent of premiums in the program. In addition, since their premiums generally correspond to their likelihood of collecting claims payments, their decisions to stay in or leave the program would not affect its actuarial soundness at the national level. Consequently, RMA would not generally need to raise premium rates for participants remaining in the pool.
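To make the three risk measures discussed above concrete, here is a minimal sketch of how they could be computed from policy-level records; the column names and figures are hypothetical and do not reflect RMA’s actual data layout.

```python
import pandas as pd

# Hypothetical policy-level records; column names and values are
# illustrative only, not RMA's actual data.
policies = pd.DataFrame({
    "group":     ["highest_income", "highest_income", "other", "other"],
    "premium":   [50_000.0, 30_000.0, 20_000.0, 10_000.0],    # total premium
    "claims":    [40_000.0, 28_000.0, 17_000.0, 8_000.0],     # indemnities paid
    "liability": [700_000.0, 500_000.0, 220_000.0, 90_000.0], # value insured
})

by_group = policies.groupby("group")[["premium", "claims", "liability"]].sum()

# Loss ratio: claims payments divided by total premiums.
by_group["loss_ratio"] = by_group["claims"] / by_group["premium"]

# Loss cost ratio: claims payments per unit of liability.
by_group["loss_cost_ratio"] = by_group["claims"] / by_group["liability"]

# Premium rate: premium charged per unit of liability.
by_group["premium_rate"] = by_group["premium"] / by_group["liability"]

print(by_group[["loss_ratio", "loss_cost_ratio", "premium_rate"]])
```

Similar loss ratios across the two groups, paired with lower loss cost ratios and premium rates for one group, would be consistent with the pattern described above: premiums commensurate with claims, with the rate differences explained by plan and coverage choices.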
If crop insurance premium subsidies had been reduced by 15 percentage points for the highest income participants that applied to farm and conservation programs with income limits each year from 2009 through 2013, the federal government would have saved more than $70 million over the 5-year period, according to our analysis of FSA and RMA data. If premium subsidies had been eliminated altogether for this group of highest income participants, the federal government would have saved about $290 million over the 5-year period. However, these estimates may understate what the actual savings would have been because, as mentioned earlier, our analysis does not cover all crop insurance participants. For example, our analysis does not include participants that decided not to apply for farm and conservation programs after they realized their incomes were too high but did participate in the crop insurance program. Furthermore, the crop insurance program is expanding with the new shallow loss programs under the 2014 farm bill, and savings would be higher if these programs were subject to a subsidy reduction for the highest income participants. The savings estimate we discuss in this report is one of several such estimates we have calculated in reports on the crop insurance program; these estimates are summarized in appendix V.

Other factors, such as participants’ choices about insurance protection, could also affect the amount of savings. For example, if some of the highest income participants selected less expensive insurance plans or lower coverage levels, or if they left the program in response to a reduction in subsidies, the potential savings would be greater because the total amount of federal premium subsidies would decrease. Participants’ decisions could be influenced by multiple factors, including the availability of other risk management tools to protect against crop and revenue losses. For example, some risk management tools, such as forward contracts that lock in a price to be paid on a future date, are not generally available for all crops. Other risk management tools for participants include producing a diverse range of crops and livestock, working in off-farm occupations, or accruing enough savings to self-insure, according to some agricultural economists.

In addition to participants’ choices, several other factors could influence federal government savings, such as crop prices, participants’ income, or policy provisions. If crop prices changed, savings could be smaller or larger because premiums are affected by crop prices and, as the value of the crops being insured goes up or down, so do crop insurance premiums. Since premium subsidies are a set percentage of the premiums, these subsidy amounts would rise or fall along with premium amounts. If participants’ incomes changed, the number of participants with incomes exceeding a given threshold could also change, affecting the amount of federal government savings. Policy provisions could also influence savings by specifying an income threshold or reduction in subsidies that differs from the ones used in our analysis. For example, the $900,000 income limit for individuals that went into effect for farm programs in 2014 affected less than one-half of 1 percent of farm program participants, according to preliminary FSA data.
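Savings estimates such as the $70 million and $290 million figures above, and the $35 million scenario discussed next, reduce to simple arithmetic over the affected participants’ premiums, because subsidies are a set percentage of premiums. The following is a minimal sketch with purely illustrative figures, not the actual program totals:

```python
# Illustrative savings arithmetic for a premium subsidy reduction.
# All figures are hypothetical; the report's estimates come from
# policy-level FSA and RMA data.

total_premiums = 400_000_000.0  # affected participants' premiums over 5 years
subsidy_rate = 0.62             # assumed average subsidy share of premium

# A 15-percentage-point reduction saves 15 percent of premiums, since
# subsidies are a set percentage of premiums.
savings_reduction = total_premiums * 0.15

# Eliminating the subsidy saves the full subsidy amount.
savings_elimination = total_premiums * subsidy_rate

print(f"15-point reduction saves: ${savings_reduction:,.0f}")
print(f"Full elimination saves:   ${savings_elimination:,.0f}")
```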
If this limit applied to crop insurance participants, and one-half of 1 percent of these participants had their premium subsidies reduced by 15 percentage points, assuming other factors did not change, the federal government would save about $35 million over 5 years.

USDA could use existing procedures without adding requirements for a majority of crop insurance participants if a statutory provision were enacted directing USDA to reduce premium subsidies for the highest income participants. According to FSA officials, FSA has existing procedures to administer income limits for farm and conservation programs that could be used to identify the highest income crop insurance participants if such a provision were enacted. According to RMA officials, even with information from FSA, RMA and the insurance companies could face some challenges in administering a provision that would reduce premium subsidies for the highest income participants. However, RMA has procedures in place or under development that may help administer a premium subsidy reduction for the highest income participants.

FSA, in cooperation with the IRS, has existing procedures to verify participants’ compliance with income limits applicable to farm and conservation programs. FSA officials told us that these procedures could be used to identify the highest income participants in the crop insurance program, if required. As we reported in August 2013, FSA and the IRS implemented an income verification process in 2009. As part of this process, applicants certify whether their income is above or below the limits and provide consent for the IRS to disclose certain tax-related information to FSA. Entities that participate in farm and conservation programs identify their members and their percentage share in the entity because individuals, entities, and all members of those entities are subject to income limits. FSA also verifies compliance with the income limits for applicants that participate only in NRCS’s conservation programs. NRCS accesses FSA’s eligibility data system—used to document whether applicants comply with requirements, including income limits, and are eligible for program benefits—to determine applicants’ compliance with income limits. FSA has existing procedures to safeguard the privacy and confidentiality of applicants’ income information, according to agency documents. Appendix II contains additional information on the procedures that FSA uses to administer income limits for farm and conservation programs.

If premium subsidies were reduced for the highest income crop insurance participants, a majority of crop insurance participants would not need to provide additional information to FSA, according to our analysis of agency data from 2009 through 2013. About two-thirds of crop insurance participants, on average, also participated in farm and conservation programs that have income limits. In order to be eligible for these programs, participants complete forms certifying their compliance with the limits. This information could be used for the crop insurance program if a similar provision were enacted. The approximately one-third of crop insurance participants that do not already provide information to FSA would need to complete a form certifying that their income was below the limits and authorizing FSA to verify this information. Entities also would need to provide FSA with information about their entity structure and their members if they do not already provide that information.
As we found in September 2013, participants in certain farm programs have had to submit this information and update it as needed. FSA is currently responsible for determining whether participants have incomes exceeding the limits for both FSA and NRCS programs. FSA officials told us that they could also make these determinations for crop insurance participants that are not participating in farm and conservation programs, if needed.

If premium subsidies were reduced for the highest income crop insurance participants, there are opportunities for RMA to work with FSA to obtain access to FSA’s eligibility data system. This would allow RMA to identify crop insurance participants with the highest incomes. Administering the reduction of premium subsidies would involve informing crop insurance participants and insurance companies of the requirements, including when participants need to certify their income and provide other needed information to FSA, and calculating the appropriate premium subsidy amount for each crop insurance participant. RMA officials told us that administering a provision that would reduce premium subsidies for the highest income participants would pose some challenges, but these could be addressed through discussions with FSA and the insurance companies. For example, RMA and FSA would need to reconcile their data on entities because members of entities—which are subject to income limits—may be reported differently for crop insurance and farm and conservation programs, according to RMA officials. Additionally, RMA officials said crop insurance participants’ income status would need to be known in advance of the application for or renewal of crop insurance policies, to allow insurance companies to quote accurate premiums and participants to make informed decisions about their insurance protection.

RMA has existing procedures to administer the eligibility requirements of the crop insurance program and to reduce benefits, including premium subsidies, under certain conditions. Some of these procedures may be similar to those that would be needed to reduce premium subsidies for the highest income participants. For example, RMA’s regulations and guidance direct insurance companies to proportionally or fully reduce coverage in policies where some or all members of an entity are ineligible for crop insurance. In addition, RMA revised its procedures to comply with a modification in the 2014 farm bill that calls for reducing program benefits, including premium subsidies, for some crop insurance participants that newly till land in certain states. Insurance companies are responsible for reporting when a crop insurance participant tills land covered by the provision, according to RMA officials.

RMA, FSA, and NRCS are also developing procedures to administer the conservation compliance requirements in the 2014 farm bill that may help administer premium subsidy reductions for the highest income crop insurance participants. Agency officials told us that they expect to promulgate program rules and issue guidance for implementation in 2015. The 2014 farm bill expanded conservation compliance requirements, applicable to farm program payments since 1985, to crop insurance premium subsidies, which had been excluded from the requirements since 1996.
Under the 2014 farm bill, participants are prohibited from receiving premium subsidies if they produce agricultural commodities on land that is prone to erosion without implementing an approved conservation plan or obtaining an exemption, or if they convert a wetland to grow agricultural commodities. All crop insurance participants must certify their compliance with conservation requirements by submitting a one-time form to FSA. Some participants may also need to take additional steps, such as developing and implementing a conservation plan that has been reviewed and approved by the NRCS. To administer these requirements, FSA and RMA officials said that they are currently expanding their information-sharing capabilities. For example, FSA and RMA officials told us that they expect RMA will have access to FSA’s eligibility data system.

The federal crop insurance program plays a critical role in helping participants manage the risk that is inherent in farming. The federal government has promoted the use of crop insurance through premium subsidies in part to achieve high participation and coverage levels. However, as budgetary pressures persist, it is crucial that federal resources be targeted as effectively as possible. Reducing premium subsidies for the highest income crop insurance participants presents an opportunity to save millions of taxpayer dollars with minimal effect on participants and the program. From 2009 through 2013, if the income thresholds in effect for farm and conservation programs had applied to crop insurance, we estimate that about 1 percent of crop insurance participants would have exceeded the thresholds and had their subsidies reduced. These participants would still have access to crop insurance and, given their income level, they would be able to afford the higher premiums if their subsidies were reduced. Further, reducing subsidies for the highest income participants would not likely affect the program’s actuarial soundness or viability. USDA has existing procedures, and some under development, that would help it implement a reduction in premium subsidies for the highest income participants.

To reduce the cost of the crop insurance program and achieve budgetary savings for deficit reduction or other purposes, Congress should consider reducing premium subsidies for the highest income participants.

We provided a draft of this report to USDA for review and comment. In its written comments, reproduced in appendix VI, USDA said it had no comment on the draft report. We are sending copies of this report to the appropriate congressional committees; the Secretary of Agriculture; the Director, Office of Management and Budget; and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or fennella@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VII.

Our objectives were to determine, if premium subsidies were reduced for participants with the highest incomes, (1) the percentage and characteristics of participants that would be affected; (2) the impact, if any, on the crop insurance program; and (3) how the U.S. Department of Agriculture (USDA) could implement a reduction in premium subsidies for the highest income participants.
To address the first objective, we matched Risk Management Agency (RMA) data on crop insurance participants, including individuals, entities, and members of entities, with Farm Service Agency (FSA) data on farm and conservation programs’ participants from 2009 through 2013. We used the FSA data because the agency had data on participants’ compliance with income limits for farm and conservation programs. We chose this time frame because FSA had implemented procedures to verify the income of program participants starting in 2009, and 2013 was the most recent year available. We identified crop insurance participants that were in both the RMA and FSA datasets, either directly or through an entity, to determine whether they exceeded income limits in effect for farm and conservation programs. For this group, which included about two-thirds of crop insurance participants, we determined the percentage of participants whose incomes exceeded limits in the Food, Conservation, and Energy Act of 2008. Specifically, these limits included, depending on the program, average adjusted gross farm income of $750,000; average adjusted gross nonfarm income of $500,000; or average adjusted gross nonfarm income of $1 million, unless at least two-thirds of the average adjusted gross income was average adjusted gross farm income. For 2012 and 2013 only, an additional limit of average adjusted gross income of $1 million, including both farm income and nonfarm income, applied for certain farm payments. We identified the number of participants that FSA determined to be ineligible because their incomes exceeded statutory limits, and we considered those the “highest income participants.” We did not determine, for each statutory limit, the number of participants with incomes exceeding it because some participants were subject to multiple income limits, and FSA data did not always specify which limit or limits had been exceeded by a given participant. We included in our estimates of the number of highest income participants those that had catastrophic coverage policies.

We also used a second analytical approach in which we assumed that participants that exceeded income limits in at least 1 year exceeded the limits during all 5 years. This approach allowed us to include some of the highest income participants that may have left farm and conservation programs because they were identified as exceeding income limits. We considered this to be an upper estimate because some of these participants may have left for other reasons.

We analyzed RMA data to identify the characteristics of these and other participants for which we had income information, including the states listed on their policies, the crops they insured, and the insurance plans and coverage they selected. Some crop insurance participants had shares in multiple policies in more than one state. In those cases, when determining the number and percentage of participants in each state, we used the state on the crop insurance policy closest to the address in FSA’s records. The address in the FSA records is generally the participant’s residence or business address, according to an FSA official. We used premiums as the basis of our analysis of the crops insured and the insurance plans and coverage selected, and we assigned the dollars proportionally based on the share of the policy or policies insured by the participants. For example, if a policy had two individuals listed as policyholders, we assigned 50 percent of the premium for that policy to each one.
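A minimal sketch of this share-weighted attribution follows; the records and column names are hypothetical, and the final step sums a participant’s attributed premiums across policies, as described next in the text.

```python
import pandas as pd

# Hypothetical policy-share records; not the actual RMA data layout.
shares = pd.DataFrame({
    "policy_id":      [101, 101, 102],
    "participant":    ["A", "B", "A"],
    "share":          [0.5, 0.5, 1.0],  # share of the policy insured
    "policy_premium": [8_000.0, 8_000.0, 3_000.0],
})

# Assign premium dollars proportionally to each participant's share...
shares["attributed_premium"] = shares["share"] * shares["policy_premium"]

# ...then sum across policies to get each participant's total premiums.
totals = shares.groupby("participant")["attributed_premium"].sum()
print(totals)  # A: 7000.0 (4000 + 3000), B: 4000.0
```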
If a single individual had shares in multiple policies, we added up his or her shares to determine the total premiums attributed to that individual. Unless otherwise indicated, the data we report are based on crop years. We used additional sources of information to corroborate our analysis of RMA and FSA data, including USDA survey data, agency documents and reports, information from other sources such as state and state university reports and company websites, and interviews with USDA officials.

Because about one-third of crop insurance participants did not participate in farm and conservation programs, we did not have FSA data on their income. To learn about the income and characteristics of the entire population of crop insurance participants, we therefore analyzed USDA survey data of a sample of U.S. farm operations from 2009 through 2012, the latest year available. Specifically, we reviewed data for U.S. farm operations that had crop insurance expenditures. Of these farm operations, we compared operations that reported exceeding any of the income limits in effect for farm and conservation programs with those that reported exceeding none of them. We analyzed RMA program information, such as RMA’s summary of business reports and crop policy provisions, to determine the extent to which different insurance options were available for certain crops. For illustrative examples of the highest income crop insurance participants, we used publicly available sources of information, such as company websites, which we used, for example, to identify the professions of the highest income crop insurance participants. In addition, we interviewed FSA, RMA, and Economic Research Service officials regarding the number and characteristics of the highest income participants and other participants in the crop insurance program.

To address the second objective, we reviewed RMA’s authorizing legislation and analyzed RMA data to determine the effects, if any, on the actuarial soundness of the program, and the savings to the federal government, if premium subsidies were reduced for the highest income crop insurance participants. To calculate the effect, if any, on the actuarial soundness of the crop insurance program, we analyzed the value of the crops insured by, the loss experiences of, and the premiums provided on behalf of (1) the highest income participants and (2) other participants, from 2004 through 2013. Specifically, we analyzed data on three measures that reflect risk (loss ratio, loss cost ratio, and premium rate) to determine whether the highest income participants represented a lower risk to the program than other participants. We reviewed USDA and other studies and interviewed agency officials, academics, and actuarial professionals to consider whether the highest income participants would be likely to leave the crop insurance program if their subsidies were reduced, and we used our findings about the percentage of crop insurance participants who would be affected to assess the potential effects on the program if the highest income participants did leave. We chose the 10-year time frame to capture the effects of factors that can change from year to year, such as crop prices, and others that are infrequent, such as extreme weather. There are trade-offs in choosing the number of years of data to examine.
A group of actuarial experts told us that using 5 years of data is not enough to cover the weather cycle and that at least 10 years of data are needed, while using older data is less relevant because the crop insurance program has changed. Also, RMA, in its most recent study of its methodology for setting premium rates in 2010, found that its methodology was sound but concluded that the agency should place more weight on loss experience from more recent years to better account for current risks faced by farmers. Because we did not have complete income information for 10 years, we assumed that participants that had incomes that exceeded the limits in 1 or more years from 2009 through 2013 were highest income for the entire period. This method assumes that any participant identified as highest income from 2009 through 2013 was highest income from 2004 through 2013. It does not take into account that some of these participants may not have been highest income in each of those years. Also, there may be participants that were not identified as highest income in 2009 through 2013 but that were highest income from 2004 through 2008. For these estimates, the data we report are based on crop years. In addition, we reviewed USDA studies, our prior reports, and other studies. We also interviewed RMA officials, academics, and actuarial professionals regarding the costs of the crop insurance program and the potential effects on the actuarial soundness of, and participation in, crop insurance if premium subsidies were reduced for the highest income participants.

To calculate the potential government savings if premium subsidies were reduced for the highest income participants, we analyzed RMA and FSA data to estimate the amount of subsidies paid on behalf of participants with incomes that exceeded the limits from 2009 through 2013, and we calculated the savings that would have resulted (excluding catastrophic policies) if these subsidies were reduced by 15 percentage points or eliminated. These calculations were consistent with proposals raised during the 2014 farm bill debate. We estimated the savings that would have resulted if the subsidies were eliminated to provide an upper estimate for the potential savings. We chose the 5-year time period because recent years more closely reflect current program provisions and participation levels.

To address the third objective, we reviewed USDA documents and our prior reports to determine how USDA could administer a provision that would reduce premium subsidies for the highest income crop insurance participants. We reviewed the Agricultural Act of 2014 and USDA regulations and guidance, and we interviewed RMA, FSA, and NRCS officials to determine how the agencies are implementing conservation compliance for crop insurance and to obtain an update from FSA on how it is administering income limits for farm and conservation programs. We reviewed industry and academic publications and testimonies to identify challenges that may be posed by administering a provision that would reduce premium subsidies for the highest income participants. We also interviewed RMA and FSA officials regarding the potential feasibility of administering such a provision and potential challenges. For the data used in our analyses, we generally reviewed agency documentation, such as guidance, handbooks, and reports related to the data systems; interviewed knowledgeable officials; and reviewed applicable internal controls information to evaluate the reliability of these data.
In each case, we concluded that the data were sufficiently reliable for the purposes of this report. We conducted this performance audit from December 2013 to March 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

The Food, Conservation, and Energy Act of 2008 (2008 farm bill) modified eligibility rules for many farm and conservation programs, including setting separate income limits for an individual’s or legal entity’s farm income and nonfarm income. In October 2008, we recommended that the Farm Service Agency (FSA) work with the Internal Revenue Service (IRS) to develop a system for verifying income eligibility for all recipients of farm payments. FSA, in cooperation with IRS, implemented procedures for verifying whether farm and conservation program participants’ incomes exceeded statutory limits starting in 2009. In 2014, FSA made changes to incorporate income limits applicable to farm and conservation programs in the Agricultural Act of 2014 (2014 farm bill) and made other adjustments to its procedures.

Under the 2008 farm bill, income limits for farm and conservation programs were based on adjusted gross income (AGI) limits averaged over the 3 most recent tax years. Specifically, participants were not eligible to receive some farm payments if their average adjusted gross nonfarm income exceeded $500,000; another type of farm payment if their average adjusted gross farm income exceeded $750,000; and conservation payments if their average adjusted gross nonfarm income exceeded $1 million, unless at least 66.66 percent of their average AGI was average adjusted gross farm income. Further, for 2012 and 2013 only, a $1 million average limit on total AGI, both farm and nonfarm, applied for certain farm payments. Because these income limits applied to individuals, under certain conditions, a married couple could collectively earn up to $2 million in average AGI and be eligible for certain farm payments in 2012 and 2013. The 2008 farm bill also allowed the U.S. Department of Agriculture (USDA) to waive the income limit for conservation payments in cases involving environmentally sensitive land of special significance.

FSA developed procedures to apply the income limits to program participants, as we found in August 2013. Starting in 2009, all applicants to farm and conservation programs have had to both (1) certify their compliance with income limits and (2) provide written consent for the IRS to release certain information to FSA to verify their income. In 2009 and 2010, participants provided the certification and consent in two separate forms; starting in 2011, they could use a single form. Participants that chose not to submit a consent form were ineligible for farm and conservation programs subject to income limits and had to refund all payments received under these programs. For participants that provided consent, IRS used its tax database to estimate farm income and nonfarm income according to USDA instructions. IRS computer programs compared these income estimates against the 2008 farm bill’s income limits to identify participants that may have exceeded these limits, and IRS provided the resulting list to FSA.
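The threshold comparison described above amounts to a set of checks on 3-year average incomes. The following is a simplified sketch of the 2008 farm bill limits; the function and its inputs are hypothetical, and FSA’s actual determinations involve additional steps.

```python
def exceeds_2008_limits(avg_farm_agi: float, avg_nonfarm_agi: float) -> dict:
    """Simplified check of the 2008 farm bill income limits.

    Inputs are 3-year averages of adjusted gross farm and nonfarm income.
    Returns which categories of payments the participant would be
    ineligible for. Hypothetical illustration only.
    """
    total_agi = avg_farm_agi + avg_nonfarm_agi
    farm_share = avg_farm_agi / total_agi if total_agi > 0 else 0.0

    return {
        # Some farm payments: average nonfarm income over $500,000.
        "some_farm_payments": avg_nonfarm_agi > 500_000,
        # Another type of farm payment: average farm income over $750,000.
        "other_farm_payments": avg_farm_agi > 750_000,
        # Conservation payments: average nonfarm income over $1 million,
        # unless at least two-thirds of average AGI is farm income.
        "conservation_payments": (avg_nonfarm_agi > 1_000_000
                                  and farm_share < 2 / 3),
    }

print(exceeds_2008_limits(avg_farm_agi=200_000, avg_nonfarm_agi=1_200_000))
# {'some_farm_payments': True, 'other_farm_payments': False,
#  'conservation_payments': True}
```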
FSA then notified potentially ineligible participants to give them the opportunity to provide documentation, such as tax returns, if they believed their income did not exceed the eligibility limits. FSA state offices were to review the information provided and determine whether participants had income exceeding the limits. FSA also deemed participants to be noncompliant with the limits if they (1) provided an acknowledgment that their incomes exceeded the limits or (2) did not respond at all. FSA state offices informed their state-level Natural Resources Conservation Service (NRCS) counterparts of participants that were determined to have exceeded income limits for conservation programs, so that NRCS could recover any overpayments made to participants in its programs.

Under the 2008 farm bill, FSA also established procedures to apply income limits to entities, members of entities, and couples who filed joint returns, according to FSA’s regulations and handbook on payment eligibility, payment limitations, and average AGI. Entities had to provide a form including information about the entity, its members, and the percentage ownership share of each member, and update it as needed. FSA required this information to verify entities’ compliance with provisions other than income limits that are applicable to farm programs. Compliance with income limits was tracked through four levels of legal entity ownership. If some individuals or entities within the four levels did not comply with the income limits, payments were reduced by an amount commensurate with the ineligible share. For married couples who filed joint tax returns, FSA considered the joint income levels to make eligibility determinations, unless a certified public accountant or attorney provided a statement of what each individual’s income would have been had the couple filed separate tax returns.

FSA is revising its procedures to incorporate the income limit enacted in the 2014 farm bill and to help improve its operation, although the procedures established to implement the limits under the 2008 farm bill will generally remain in place. FSA is updating its forms, handbook, and eligibility data system to reflect the revised procedures. These changes were made because the 2014 farm bill includes an average AGI limit of $900,000, calculated over the 3 most recent tax years, rather than multiple limits, and makes no distinction between farm and nonfarm income. According to FSA officials, this limit is expected to simplify the administration of income limits. FSA is also making changes aimed at improving its operation. For example, FSA announced that, starting in December 2014, it has largely automated its process for ensuring it has certification and consent forms on file for all participants subject to income limits.

The numbers and percentages of crop insurance participants that were highest income varied by state among all crop insurance participants that applied to farm and conservation programs with income limits, according to our analysis of agency data from 2009 through 2013. About half of the highest income participants reported an address in five states: Texas, Kansas, Illinois, Iowa, and California. For each state, we calculated the highest income participants’ share of all crop insurance participants in that state, based on (1) the number of participants and (2) premiums. About 1 percent of crop insurance participants were highest income, on average.
The percentage of highest income participants ranged from 0.4 percent to 6.1 percent across states. About 1 percent of the crop insurance participants’ premiums were attributed to the highest income participants, on average. The percentage of premiums attributed to the highest income participants ranged from 0.3 percent to 13.9 percent across states. Table 4 shows the average number and percentages of the highest income crop insurance participants by state, listed in order from highest to lowest average numbers from 2009 through 2013.

The tables below provide information on crop insurance participants by income level. The tables provide information on the number, percentage, and selected characteristics of the highest income and other crop insurance participants that applied to farm and conservation programs with income limits, as well as insurance protection choices for the highest income participants and other participants by crop. Table 5 shows the average annual number and percentages of the crop insurance participants that were highest income and other participants by number of participants, premiums, and value of insured crops. The table shows that the percentage of crop insurance participants that were highest income is about 1 percent, regardless of the measure used. Table 6 shows selected characteristics per participant. The table shows that the highest income participants insured more acres and had more premium subsidies, claims payments, and value of insured crops per participant, on average, than other participants. Table 7 shows the insurance protection choices by crop insured for the highest income and other participants. The table shows that both the highest income participants and other participants varied in their choices depending on the crops they insured.

From 2012 through 2015, in addition to this report, we issued three other reports that identified potential actions that could be taken by Congress or the Risk Management Agency to reduce the cost of the crop insurance program and achieve budgetary savings. Table 8 shows the reports, potential government actions we reviewed, and estimated federal dollar savings associated with each potential action, at the time we issued these reports.

In addition to the individual named above, Susan Offutt (Chief Economist), Frank Todisco (Chief Actuary), Thomas Cook (Assistant Director), Cheryl Arvidson, Kevin Bray, Christine Feehan, Michael Kendix, Anne Rhodes-Kline, and Ruth Solomon made key contributions to this report.

Crop Insurance: In Areas with Higher Crop Production Risks, Costs Are Greater and Premiums May Not Cover Expected Losses. GAO-15-215. Washington, D.C.: February 9, 2015.

Crop Insurance: Considerations in Reducing Federal Premium Subsidies. GAO-14-700. Washington, D.C.: August 8, 2014.

Farm Programs: Changes Are Needed to Eligibility Requirements for Being Actively Involved in Farming. GAO-13-781. Washington, D.C.: September 26, 2013.

Farm Programs: Additional Steps Needed to Help Prevent Payments to Participants Whose Incomes Exceed Limits. GAO-13-741. Washington, D.C.: August 29, 2013.

2013 Annual Report: Actions Needed to Reduce Fragmentation, Overlap, and Duplication and Achieve Other Financial Benefits. GAO-13-279SP. Washington, D.C.: April 9, 2013.

High-Risk Series: An Update. GAO-13-283. Washington, D.C.: February 14, 2013.

Crop Insurance: Savings Would Result from Program Changes and Greater Use of Data Mining. GAO-12-256. Washington, D.C.: March 13, 2012.
Crop Insurance: Opportunities Exist to Reduce the Costs of Administering the Program. GAO-09-445. Washington, D.C.: April 29, 2009. Crop Insurance: Continuing Efforts Are Needed to Improve Program Integrity and Ensure Program Costs Are Reasonable. GAO-07-944T. Washington, D.C.: June 7, 2007. Crop Insurance: Continuing Efforts Are Needed to Improve Program Integrity and Ensure Program Costs Are Reasonable. GAO-07-819T. Washington, D.C.: May 3, 2007. Suggested Areas for Oversight for the 110th Congress. GAO-07-235R. Washington, D.C.: November 17, 2006.
The federally subsidized crop insurance program helps about 1 million participants manage the risk inherent in farming. In recent years, the government's costs for the crop insurance program have increased substantially, and these costs have come under scrutiny as the nation's budgetary pressures have been increasing. Unlike farm and conservation programs, the crop insurance program provides the same level of subsidies to participants regardless of their income. GAO was asked to examine the potential effects of reducing premium subsidies for the highest income crop insurance participants. This report examines (1) the percentage and characteristics of participants that would be affected; (2) the impact, if any, on the crop insurance program; and (3) how USDA could implement a reduction in premium subsidies for the highest income participants. GAO analyzed RMA crop insurance data and FSA data on compliance with income limits from 2009 through 2013 (the most recent year of available data), analyzed RMA data to examine the impact on the program and calculate potential savings, reviewed agency guidance and industry and academic publications, and interviewed USDA officials and stakeholders.

About 1 percent of crop insurance participants would have been affected if premium subsidies had been reduced for the highest income participants from 2009 through 2013, based on GAO's analysis of data from the U.S. Department of Agriculture's (USDA) Risk Management Agency (RMA) and Farm Service Agency (FSA). The highest income participants were those with incomes that exceeded limits in place for farm and conservation programs. In terms of characteristics, the highest income participants insured more farmland and had more premium subsidies provided on their behalf than other participants from 2009 through 2013. All crop insurance participants generally insured major crops, such as corn, soybeans, and wheat, but the highest income participants were more likely to insure specialty crops, such as fruits, vegetables, and nursery crops. The highest income participants also made choices similar to those of other participants in terms of the type of crop insurance and the levels of coverage they chose.

Reducing crop insurance subsidies for the highest income participants would have a minimal effect on the program and save millions of dollars. RMA is directed by law to adopt rates and coverages that will improve the actuarial soundness of the crop insurance program. Actuarial soundness under the program means that premiums are adequate to cover expected claims and a reasonable reserve. Based on GAO's analysis of agency data, participants' premiums generally corresponded to their likelihood of collecting claims payments, regardless of their income level. Also, the highest income participants account for only about 1 percent of the premiums in the program. As a result, their decisions to stay in or leave the program would likely not affect the crop insurance program's actuarial soundness at the national level. If premium subsidies had been reduced by 15 percentage points for the highest income participants from 2009 through 2013, the federal government would have saved more than $70 million over the 5-year period, according to GAO's analysis of agency data. The current income limit, enacted in 2014 for farm and conservation programs, would likely affect fewer crop insurance participants than did the previous limits. Consequently, the savings would be smaller.
USDA could use existing procedures to implement a reduction in subsidies for the highest income participants. FSA has procedures to verify participants' compliance with income limits applicable to some farm and conservation programs. About two-thirds of crop insurance participants, on average, participated in programs that had income limits from 2009 through 2013 and would not need to provide additional information. Opportunities exist for RMA to access FSA's eligibility data system and work with insurance companies to apply the reduction in premium subsidies for the highest income participants. According to RMA officials, administering a provision that would reduce premium subsidies for the highest income participants would pose some challenges. For example, RMA and FSA would need to reconcile certain data on participants that are subject to the income limit. However, USDA is developing procedures to administer conservation compliance requirements in the Agricultural Act of 2014 that could help administer a premium subsidy reduction for the highest income crop insurance participants. To reduce the cost of the crop insurance program and achieve budgetary savings for deficit reduction or other purposes, Congress should consider reducing premium subsidies for the highest income participants. In written comments, USDA stated that it had no comments on the draft report.
ACF’s Children’s Bureau is responsible for the administration and oversight of federal funding to states for child welfare services under Titles IV-B and IV-E of the Social Security Act. However, the monitoring of children served by state child welfare agencies is the responsibility of the state agencies that provide the services to these children and their families. Child welfare caseworkers at the county or local level are the key personnel responsible for documenting the wide range of services offered to children and families, such as investigations of abuse and neglect; treatment services offered to families to keep them intact and prevent the need for foster care; and arrangements made for permanent or adoptive placements when children must be removed from their homes. Caseworkers are supported by supervisors who typically assign new cases to workers and monitor caseworkers’ progress in achieving desired outcomes, analyzing and addressing problems, and making decisions about cases. A number of efforts have been made at the national level to implement comprehensive data systems that capture, report, and analyze the child welfare information collected by the states (see table 1 for information on national data systems as well as information on state systems).

To qualify for federal funding for SACWIS, states must prepare and submit an advance planning document (APD) to ACF’s Children’s Bureau, in which they describe the state’s plan for managing the design, development, implementation, and operation of a SACWIS that meets federal requirements and state needs in an efficient, comprehensive, and cost-effective manner. In addition, the state must establish SACWIS and program performance goals, in terms of projected costs and benefits, in the APD. States are required to submit separate APDs for the planning and development phases, in addition to periodic updates. Since the administration and structure of state child welfare agencies vary across the nation, states can design their SACWIS to meet their state needs, as long as they meet certain federal requirements. Federal funding is available to states for SACWIS that meet the requirements for reporting AFCARS data to HHS; to the extent practicable, are capable of linking with the state data collection system that collects information on child abuse and neglect; to the extent practicable, are capable of linking with, and retrieving information from, the state data collection system that collects information on the eligibility of individuals under Title IV-A—Temporary Assistance for Needy Families; and provide for more efficient, economical, and effective administration of the programs carried out under a state’s plans approved under Titles IV-B and IV-E of the Social Security Act. A SACWIS must operate uniformly as a single system in each state and must encompass all entities that administer programs provided under Titles IV-B and IV-E. In some cases, HHS will allow the statewide system to link to another state system to perform required functions, such as linking to financial systems to issue and reconcile payments to child welfare service providers. The state’s APD must describe how its SACWIS will link to other systems to meet the requirements in the SACWIS regulations.
In addition to monitoring the APDs of the states that are developing SACWIS, HHS reviews state information systems through formal SACWIS assessment reviews and the Child and Family Services Reviews (CFSR)—a federal review process to monitor states’ compliance with child welfare laws and federal outcome measures. The formal SACWIS reviews are conducted by ACF’s Children’s Bureau to determine if a state has developed and implemented all components detailed in the state’s APD and if the system adheres to federal requirements. The CFSR assesses statewide information systems, along with other systemic factors, to determine if the state is operating a system that can readily identify the status, demographic characteristics, location, and goals for placement of every child who is in foster care. This systemic factor is reviewed in all states, regardless of whether the state is developing a SACWIS or the stage of system development. According to results from the fiscal years 2001 and 2002 CFSRs, 4 of the 32 states that HHS reviewed were not in substantial conformity on the statewide information system indicator. These 4 states must address how they will come into conformity with this factor in a program improvement plan. HHS has also conducted SACWIS reviews in 2 of these states.

While 47 states are developing or operating a SACWIS, many challenges remain despite HHS’s oversight and technical assistance. Since 1994, states reported that they have spent approximately $2.4 billion in federal, state, and local funding on SACWIS. While most state officials we interviewed and those responding to our survey said that they recognize the benefits their state will achieve by developing a statewide system, many states reported that the development of their SACWIS is delayed between 2 months and 8 years beyond the time frames the states set for completion, with a median delay of 2½ years. Most states responding to our survey faced challenges, such as obtaining state funding and developing a system that met the child welfare agency’s needs statewide. In response to some of these challenges, HHS has provided technical assistance to help states develop their systems and conducted on-site SACWIS reviews to verify that the systems meet all federal requirements.

Currently, 47 states are developing or operating a SACWIS and are in various stages of development, ranging from planning to complete. The states responding to our survey reported using approximately $1.3 billion in federal funds and approximately $1.1 billion in state and local funds for their SACWIS. However, HHS estimated that it allocated approximately $821 million between fiscal years 1994 and 2001 in SACWIS developmental funds and $173 million between fiscal years 1999 and 2001 in SACWIS operational funds. The total amount of federal funding provided to states for SACWIS is unknown because states claimed operational costs as a part of their Title IV-E administrative expenses prior to 1999. Although the federal government matched state funding at an enhanced rate of 75 percent beginning in 1994, many states did not apply for federal funding or begin SACWIS development until 1996 and 1997, when more than $467 million—the bulk of federal funds—was allocated. Most states were still developing their SACWIS by the time enhanced funding expired in 1997, after which states could receive a 50 percent federal financial participation (FFP) rate for SACWIS development and operation.
Although 47 states are currently developing or operating a SACWIS, all states except Hawaii received some federal SACWIS funds. For example, according to figures provided by HHS, North Carolina received approximately $9.6 million in developmental funds, and North Dakota received approximately $2.4 million in developmental funds and $245,000 in operational funds for SACWIS, but both states encountered difficulties that prevented them from completing their systems. In these situations, HHS entered into negotiations with the states about the amount of money that the states must return to the federal government. To track states’ SACWIS development, HHS places states in six categories that identify their stage of development (see table 2). States are required to submit APD updates periodically, which inform HHS of their progress in developing SACWIS. See appendix II for a complete list of states’ phases of development.

Although most states continue to advance in the development of their systems, some encounter problems that cause HHS to recategorize them into a lower stage of development. In Pennsylvania, for example, the child welfare agency encountered difficulties, such as inadequate computer software to support a comprehensive SACWIS, after attempting to implement its SACWIS in 2000. Due to these problems, the state is in the process of shutting down the system and has resubmitted an APD for a new system to HHS for review and approval for further federal funding. According to figures provided by HHS, Pennsylvania has received approximately $9.7 million in federal funding thus far. In addition, while HHS may classify a state system as complete following an assessment of its SACWIS, a state may make additional changes to the system, since a SACWIS, like other computer systems, continually evolves as technology and child welfare practices change. States can claim federal funding for these changes as operational expenses. For example, Oklahoma’s SACWIS was the first system to be determined complete, but the state has made enhancements to its system since HHS found the system in compliance with federal requirements in 1998. In addition, Oklahoma is currently considering moving to a Web-based system. An HHS official reported that such changes do not need prior approval unless they are in excess of $5 million.

In developing a system, states have considerable flexibility in the design of their SACWIS. According to HHS officials, a state should be using its SACWIS as a case management tool that uses automation to support the various aspects of state child welfare programs, such as recording child protection, out-of-home care, and foster care and adoption services. To further assist child welfare practice, states have designed their systems to follow the natural flow of child welfare practice in their state and have added design features to help track key events during a case. For example, in Iowa, child welfare work is divided between child abuse and neglect investigations and ongoing case management for children brought into the care of the child welfare agency. As a result, Iowa designed a SACWIS to reflect this work process by linking two databases—one to record child abuse and neglect information and one to record ongoing case records—that share information with one another. In Rhode Island, the SACWIS was designed to alert caseworkers if an alleged perpetrator has been the subject of three reports of abuse or neglect within 1 year.
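Business rules like Rhode Island’s can be expressed as a simple check over an alleged perpetrator’s report history. The following is a minimal sketch under assumed data structures; the report does not describe the actual implementation.

```python
from datetime import date, timedelta

def should_alert(prior_report_dates: list[date], new_report: date) -> bool:
    """Return True when a new report is at least the third report of abuse
    or neglect for the same alleged perpetrator within 1 year, regardless
    of the findings of the earlier reports. Hypothetical illustration."""
    window_start = new_report - timedelta(days=365)
    recent = [d for d in prior_report_dates if window_start <= d <= new_report]
    return len(recent) + 1 >= 3  # +1 counts the new report itself

# Two prior reports within the past year trigger an alert when a
# third report is received.
priors = [date(2002, 3, 1), date(2002, 9, 15)]
print(should_alert(priors, date(2002, 11, 2)))  # True
```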
Regardless of the findings of each report, this alert notifies the caseworker to initiate an investigation when a third report is received.

Since many states are in different phases of SACWIS development, their systems currently support, to varying degrees, a variety of child welfare and administrative components (see table 3). According to HHS, while the components represented in table 3 are required for a state's SACWIS to be considered compliant with federal guidance—either through an interface or built within the system—some of the subcomponents, such as a function that helps caseworkers manage their caseloads, are optional. HHS has encouraged states to automate as many functions as possible in the SACWIS in an effort to cut down on the additional paperwork or duplicative steps inherent in manual data collection. One of these services, tracking independent living, is becoming more important for states as HHS decides how to implement the Foster Care Independence Act of 1999 and considers the development of the NYTD. Some states have already started collecting data on older youth and the services they receive. Currently, 27 states reported in our survey that they are at some stage of using their SACWIS to track independent living services, and an additional 14 states plan to include this component in their system in preparation for the requirements. However, 21 of the 46 states that reported developing or operating a SACWIS said that they would have to make substantial changes to their SACWIS in order to capture this information.

To assist with the design of their SACWIS, states relied on a number of different participants, including internal users, such as caseworkers and managers, information technology (IT) staff, and contractors. Most states found these participants to be extremely or very helpful in the process (see table 4). In Oklahoma, for example, 150 child welfare staff from the field worked closely with the contractor in intensive work group sessions to design and test the system. To complement the caseworkers' knowledge of child welfare practice, 43 states relied on IT staff. In Colorado, for example, IT staff said that during SACWIS design and development, they shared office space with program staff who had been assigned to help with SACWIS development. This co-location of staff aided in the exchange of information pertaining to the development of the system. Finally, 42 states reported that they hired private contractors to conduct a large part of SACWIS design and development. The contractors helped states meet federal requirements, designed the system with state-specific options, wrote the necessary software, tested and implemented the system, and trained users.

At the time of our review, HHS reported that 4 states were not pursuing SACWIS development, and most of these states reported various reasons in our survey for not developing a system. In Hawaii, the child welfare agency chose not to pursue SACWIS because it already had a statewide system in place that it believed was adequately meeting its needs and which was collecting and reporting federal child welfare data. After an attempt to develop a system, North Carolina cancelled its efforts because it could not build consensus across its 100 counties on the design of a uniform system.
On our site visit to North Carolina, child welfare officials reported that they are currently working on a statewide information system that will encompass a number of social services, such as food stamps and mental health services, but an HHS official reported that North Carolina is not seeking federal SACWIS funding to support the development of this system. Vermont officials reported that they did not pursue SACWIS because the legislature declined to provide the matching state funds. In retrospect, they believe that the choice not to develop SACWIS was best for the state because they found the SACWIS requirements too restrictive to enable the state to design a system to meet its needs. Officials said that the state would not use a number of the required SACWIS components, such as developing all the required electronic links to other agencies' systems, especially since the state has a small child welfare population. Another state—North Dakota—did not report in our survey the reason for stopping SACWIS development; however, HHS officials reported that the state had attempted to develop a SACWIS but faced a variety of problems, such as difficulty obtaining state funding.

While most state child welfare agency officials said they recognize the benefits the state will achieve by developing SACWIS, such as enhancing their ability to track the whereabouts of foster children, 31 state agencies lag behind the time frames they set for completion, with 26 states reporting delays ranging from 2 months to 8 years. State officials reported in our survey and during site visits that SACWIS has contributed to more efficient and effective agency functioning, which can improve states' capabilities to manage their child welfare cases, including keeping track of where the children are living and the services they are receiving. Child welfare officials in Colorado reported that automation has improved agency functioning by making child welfare case information available statewide, which is especially helpful when families move from one county to another. In Oklahoma, caseworkers and state officials noted that they believe the children they serve are safer since the implementation of SACWIS simply because the information on the children is easily accessible to the caseworkers and their supervisors. According to survey results, automated systems provided easier access to data and allowed caseworkers to better monitor children in their care, which may contribute to additional child welfare and administrative benefits, such as decreased incidence of child abuse and neglect, shortened length of time to achieve adoption, timeliness of payments to foster families, and timeliness of payments to foster facilities (see table 5).

New Jersey reported in our survey that its goal in developing a SACWIS is to integrate the more than 40 stand-alone systems that currently capture information on the children served by its child welfare agency. By pulling all of these systems together into a uniform SACWIS, the state hopes to improve the recording of casework activities in a timely manner and to develop a tool to better target resources and services. Effectively integrating these systems will require the state to use a disciplined IT management approach that includes (1) detailed analyses of users' needs and requirements, (2) a clearly defined strategy for addressing information needs, and (3) sufficient technical expertise and resources to support the effort.
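To make the integration task concrete, the following Python sketch—using an invented record layout, not New Jersey's actual design—merges per-child records drawn from multiple stand-alone systems by a shared identifier and flags fields on which the source systems disagree, the kind of reconciliation an integration effort must resolve.

    from collections import defaultdict

    def consolidate(records):
        """Merge per-child records from multiple stand-alone systems into a
        single case record, flagging fields where the sources disagree."""
        merged, conflicts = defaultdict(dict), []
        for rec in records:  # each rec: {"child_id": ..., "source": ..., fields}
            child = merged[rec["child_id"]]
            for field, value in rec.items():
                if field in ("child_id", "source"):
                    continue
                if field in child and child[field] != value:
                    # Keep the first value; record the disagreement for review.
                    conflicts.append((rec["child_id"], field, child[field], value))
                else:
                    child[field] = value
        return dict(merged), conflicts

    records = [
        {"child_id": "A100", "source": "intake", "dob": "1995-04-02"},
        {"child_id": "A100", "source": "payments", "dob": "1995-02-04"},
    ]
    merged, conflicts = consolidate(records)
    print(conflicts)  # [('A100', 'dob', '1995-04-02', '1995-02-04')]

Each flagged conflict would go to an analyst rather than being resolved automatically—one reason the detailed requirements analysis described above matters for an effort of this scale.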
Despite the benefits that many states have accrued with SACWIS, 31 states reported in our survey that they have been delayed in system completion beyond their initial deadline and identified a number of challenges that have led to the delay (see table 6). Some of the common difficulties states reported in developing SACWIS included receiving state funding approval, reaching internal agreement on system development, and creating a system that reflects child welfare work processes and is user friendly (see table 7). Forty-two states reported challenges receiving funding approval, and 32 states reported that insufficient state funding allocations for SACWIS development were a challenge in developing a comprehensive system. For example, Vermont officials reported that the state legislature declined to provide the matching state funds needed to secure federal funding for SACWIS. As a result, the state could not pursue development. In addition to the development challenges reported in our survey, 2 of the 5 states we visited reported that insufficient funding affected ongoing SACWIS maintenance. In Colorado, state agencies have received a series of budget cuts, which child welfare officials report have affected their ability to respond to child welfare caseworkers' needs for system improvements. In Iowa, child welfare officials reported that the state legislature appropriated $17,000 for state fiscal year 2002 for all child welfare automated systems activity, which they noted was an insufficient amount to maintain and upgrade systems as well as to pay staff. They reported that, as a result, the child welfare agency provided the information systems department with approximately $1 million from other parts of the agency's budget.

Despite user involvement in system design, some states still faced challenges trying to reach internal agreement among agency officials and caseworkers on the design of a system, resulting in a delay in development. In New York—a state where the counties are responsible for administering child welfare services—the development of SACWIS was stalled when significant frustration with the system's design led commissioners from five large counties and New York City to request that the state stop SACWIS development until a reassessment of the design and plans for the implementation of the system was completed. After a thorough evaluation of the project, the state made changes to the project plan and developed statewide work groups to ensure all counties were in agreement with the system design. In addition, the state hired a contractor to monitor system development and ensure that all users' requirements were seriously considered.

In addition, despite states' heavy reliance on contractors, many reported that securing contractors with knowledge of child welfare practice was a challenge for timely SACWIS development. Contractors are hired by the state for their system development knowledge but often are unfamiliar with child welfare policies and practices, especially since these policies and practices vary from state to state. Officials in Colorado, for example, said they encountered difficulties with their contractors because of high turnover among the contractor staff and their lack of knowledge of child welfare policies.
A contractor who has worked with 7 states to develop their SACWIS reported that contractors are asked to learn the child welfare business practices of a state in a short amount of time. At the same time, states cannot devote many resources, such as caseworkers, to help in the design process because caseworkers need to devote their time to providing services to children and families. Therefore, contractors often have to acquire knowledge on their own.

Many states reported that creating a system that reflects child welfare work processes and is user friendly was a challenge in developing SACWIS. These issues were also identified in the federal reviews of states' SACWIS. For example, one state explained in the SACWIS review that it had designed a system to meet the caseworkers' needs and reflect the nature of the child welfare work processes by developing a system that required events to be documented as they occurred. However, this design limited the SACWIS's functionality because it did not allow the caseworkers to go back and enter information after an event happened. The state explained that caseworkers do not use the system in real time, but provide services to the children and families and then record the information in the system. The state had to redesign the system to correct for this design flaw. In addition, the 14 states reporting that they have adapted a system from another state have experienced some challenges modifying the systems to reflect their work processes. While HHS advises states to consider adapting another state's system if it requires few changes, states reported that they and their contractors were not always aware of the changes that would need to be made to adapt a system. Although Colorado and New York decided to modify another state's SACWIS instead of designing a new system, child welfare officials in these states reported that adapting a SACWIS from another state has created more problems than anticipated. Colorado and New York adapted systems from state-administered child welfare agencies, which required extensive modifications to meet their needs as county-administered states. For example, Colorado needed a system that supported its administrative structure and could handle a larger number of cases.

HHS has assisted states in a variety of ways in developing and completing their SACWIS. As a part of its regulatory responsibilities, HHS must review, assess, and inspect the planning, design, development, installation, and operation of SACWIS. In addition to reviewing and monitoring states' APDs, HHS conducts on-site SACWIS reviews to comply with these responsibilities. HHS officials told us that these reviews are a detailed and thorough assessment of state systems to ensure the systems' compliance with SACWIS requirements. In addition, officials reported that they provide verbal technical assistance during the on-site review to help states that do not fully conform with the applicable regulations and policies. At the time of our review, HHS had conducted 26 SACWIS reviews; in 5 of these reviews, the systems were determined to meet all requirements and were classified as complete. HHS officials told us that since states have the flexibility to build a SACWIS that meets their needs, a large portion of the formal reviews concentrate on ensuring that the systems conform to state business practices.
For example, while SACWIS regulations require that a state report all AFCARS data from its SACWIS, one state HHS reviewed relied on a separate state system to report data on the children served by the juvenile justice agency who are eligible for IV-E foster care funds. The state demonstrated that it had developed an automated process to merge data from both systems to compile a single AFCARS report that included children captured in both its SACWIS and juvenile justice systems. Therefore, HHS recognized that this process best met the state's needs and determined the SACWIS to be complete and meeting all requirements.

Few systems have been determined complete after an on-site review because of unresolved issues, such as not being able to build links to other state information systems or not implementing certain eligibility determination functions. To help states address some of these development challenges, the SACWIS review team provides the state with recommendations for complying with SACWIS requirements. For example, HHS observed during a review in one state that the SACWIS was available statewide, but information collected in one county was not available to caseworkers in other counties. The federal officials offered recommendations to the state to meet the SACWIS requirement that all information be available statewide. In addition, HHS officials reported that once the draft report with the results of the SACWIS review is completed, federal staff schedule a conference call with the state officials to walk through the system's deficiencies and offer guidance on how the state can move forward.

HHS facilitates the sharing of information between states developing SACWIS through an automated system users group that allows state and federal officials to exchange information, ideas, and concerns. According to some state and HHS officials, the trust level at these meetings is very high, which promotes open discussions and also creates an atmosphere for informal dialogue with HHS. The systems users group developed out of another active group—the child welfare users group—when HHS solicited state representatives to help it define a model child welfare information system, which was later used as the basis for the SACWIS functional requirements after the passage of the 1993 legislation authorizing enhanced federal funding. State officials in Iowa and New York reported that the systems users group continues to play an important role in providing a forum for the honest exchange of information on SACWIS development. For example, child welfare and technical officials in New York said that the systems users group has been very beneficial because they have learned from other states' positive and negative experiences in developing SACWIS, as well as the experiences unique to states with county-administered agencies. In addition to the users group, HHS officials also sponsor a listserv—an electronic mailing list—that allows state officials to exchange information, and a monthly conference call with state information technology directors. Iowa child welfare information technology officials said that they find the monthly SACWIS telephone conference call helpful because project managers discuss issues such as promising practices and new regulations. Technical assistance for SACWIS development is also available to states through the National Resource Center for Information Technology in Child Welfare (Resource Center).
According to survey results, 9 states said they used the Resource Center for assistance in developing SACWIS and 14 states reported using it for help with SACWIS maintenance and improvements. According to Resource Center officials, they assist states with SACWIS development by helping states understand the technology that is available for use, providing information on the automation of child welfare work and converting data, and reviewing the APD documentation. For example, the Resource Center offered technical assistance to Pennsylvania to help the state decide if it should continue development of its current SACWIS, abandon the SACWIS project and allow the counties to operate individual systems, or design a different SACWIS. The Resource Center evaluated the current SACWIS to determine if it could capture information based on the SACWIS regulations and if it was user friendly for the caseworker. Following the Resource Center's analysis, Pennsylvania decided to discontinue the existing SACWIS and develop a new SACWIS. When the Resource Center opened in 1999—5 years after many states started developing SACWIS—staff were not very familiar with many of the efforts states made during development. In an attempt to remedy this lack of knowledge of states' issues in developing SACWIS, Resource Center staff participated in some of the on-site SACWIS reviews conducted by HHS. Both HHS and Resource Center officials believe this exposure to the SACWIS systems enhanced the technical assistance resources and knowledge available to the states.

Several factors affect states' ability to collect and report reliable data on children served by state child welfare agencies, and some problems, such as a lack of clear and documented guidance, exist with HHS's oversight and technical assistance. Almost all of the states responding to our survey reported that insufficient caseworker training and inaccurate and incomplete data entry affect the quality of the data reported to HHS. In addition, 36 of the 50 states that responded to our survey reported that technical challenges, such as matching their state data element definitions to HHS's data categories, affected the quality of the data that they report to the federal government. For example, North Carolina officials told us that while state policy mandates that they count every location in which a child resides, including hospital stays, AFCARS regulations say that hospital stays and other short-term placements should not be included in the count of foster care placements. In cases where state policy differs from federal policy, state officials must carefully reformat their data in order to meet federal reporting requirements. Similarly, during assessments of 6 states' compliance with AFCARS reporting standards, HHS found that these issues affect data reliability. Despite the assistance that HHS offers to states, such as testing state data quality and providing the results to states to aid them in resubmitting data, states reported ongoing challenges receiving clear and documented guidance and accessing technical assistance.

Almost every state responding to our survey and all the states we visited reported that insufficient training for caseworkers and inaccurate and incomplete data entry affect the quality of the data reported to AFCARS and NCANDS (see fig. 1). Although most states reported these as separate factors, HHS and the states we visited found that insufficient training and inaccurate and incomplete data entry are often linked.
For example, in official reviews of states' information systems' capability to capture data and report them to AFCARS, HHS advised states to offer additional training to caseworkers on several AFCARS data elements, such as recording the reasons for a child leaving foster care, to improve the accuracy of the data submitted. Similarly, Oklahoma reported that the state found that caseworkers were misinterpreting reports of policy violations by foster parents and inaccurately recording them as abuse or neglect allegations. However, state officials told us that training is typically one of the first programs cut when states face tight budget restrictions. For example, Iowa officials told us that training has been significantly reduced in recent years because of budget cuts and that new workers may wait 2 to 3 months before being trained to enter data appropriately into their SACWIS.

Inaccurate and incomplete data entry can also result from a number of other factors, such as caseworkers' hesitation to ask families for sensitive information. For example, caseworkers in Oklahoma reported that they did not feel comfortable asking if a child's mother was married at the time of birth or if a child is of Hispanic origin—both of which are required AFCARS data elements. In commenting on a draft of this report, Oklahoma added that caseworkers did not understand why the data elements were required and how the federal government used the information. In addition, Iowa state officials said that caseworkers may guess the racial backgrounds of children in their care or record them as unknown, especially when children come from mixed racial backgrounds, rather than asking the family for the information. HHS noted similar issues in 5 states that have had an AFCARS review. Caseworkers were inaccurately recording a child's race as "unable to determine" even though this option should be selected only if the child's parents or relatives cannot provide the information, such as when a child is abandoned.

Caseworkers, supervisors, and managers in the 5 states we visited reported that additional factors, such as difficulties balancing data entry with the time that they spend with the families and children, contributed to inaccurate or incomplete data entry. In addition, our recent work on caseworker recruitment and retention found that caseworkers struggle to balance the time they spend with children and data entry, and reportedly spend at least 50 percent of their time documenting case records. Supervisors in Iowa explained that since caseworkers are responsible for ensuring that children and their families receive the services they need, caseworkers tend to initially limit data entry to the information necessary to ensure timely payment to foster care providers and complete the other data elements when they have time. In addition, caseworkers in Colorado said that they are between 30 and 60 days behind in their data entry, so the information in the automated system may not accurately reflect the current circumstances of children in care. The caseworkers reported that they tend to concentrate only on entering data that will allow them to open a case in their SACWIS. HHS's Inspector General recently issued a report in which more than two-thirds of the states reported that caseworkers' workloads, turnover, a lack of training, and untimely and incomplete data entry affected the reporting of AFCARS data.
In addition to data quality being affected by caseworker issues, many states experienced technical challenges reporting their data to HHS. The problems reported by states are typically a result of challenges associated with data "mapping"—matching state data elements to the federal data elements. For example, 36 states reported in our survey that matching their state-defined data to HHS's definitions affected the quality of the data reported to NCANDS and AFCARS. Similarly, 24 states reported that matching the more detailed data options available in their states' information systems to the federal data elements affected the quality of the data reported to NCANDS. Twenty-nine states reported that this issue created challenges in reporting data to AFCARS. For example, following an AFCARS assessment, HHS instructed a state that collects detailed information on children's disabilities, such as Down syndrome, attention deficit disorder, and eating disorders, to map the information to the more limited options in AFCARS, such as mental retardation and emotionally disturbed. The Inspector General's report found that states faced similar challenges mapping their data to meet the AFCARS reporting requirements.

In many cases, states have to balance state policy with federal requirements to ensure that they are reporting accurate data to AFCARS and NCANDS but are not contradicting their state policies. For example, Texas officials reported that although the findings of their AFCARS review instructed them to modify their SACWIS to collect, map, and extract data on guardianship placements, the state does not support guardianship arrangements. In addition, a recent report from the Child Welfare League of America (CWLA) found that when reporting the number of times children move from one foster care placement to another, states varied in the type of placements included in that count. For example, 29 percent of the states responding to CWLA's survey included respite care, 25 percent included runaways, and 16 percent included trial home visits when reporting the number of placements a child had during the AFCARS report period. According to federal guidance, the "number of placements" element is meant to gather information on the number of times the child welfare agency found it necessary to move a child while in foster care; by including runaways or trial home visits, a state inflates the number of moves a child experienced. However, North Carolina officials told us that although the federal definition for placements instructs states not to include such stays when counting the number of children's foster care placements, the state instructs caseworkers to count each time a child is sleeping in a different place as a new placement. The Inspector General reported that the placement definitions were the most commonly cited source of confusion among the states surveyed.

In addition to the challenges reported in our survey, HHS reported that transferring data from older data systems into SACWIS affects the quality of the data reported to AFCARS and NCANDS. HHS officials reported that they have observed that states experience the biggest change in data quality when they begin reporting from their SACWIS. In general, the first data submissions are of low quality because of the time it takes states to transfer data or because the system resets the information for data elements.
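The mapping challenge lends itself to a simple illustration. In the Python sketch below, detailed state disability codes are collapsed into broader federal categories; the specific codes, category names, and pairings are invented for illustration and are not the official AFCARS code lists.

    # Hypothetical state-to-federal disability mapping (illustrative only).
    STATE_TO_FEDERAL = {
        "down_syndrome": "mental_retardation",
        "attention_deficit_disorder": "emotionally_disturbed",
        "eating_disorder": "emotionally_disturbed",
    }

    def map_disabilities(state_codes):
        """Collapse detailed state disability codes into broader federal
        reporting categories, noting codes with no defined mapping."""
        mapped, unmapped = set(), []
        for code in state_codes:
            if code in STATE_TO_FEDERAL:
                mapped.add(STATE_TO_FEDERAL[code])
            else:
                unmapped.append(code)  # requires an analyst's mapping decision
        return sorted(mapped), unmapped

    print(map_disabilities(["down_syndrome", "speech_delay"]))
    # (['mental_retardation'], ['speech_delay'])

As the sketch suggests, detail is lost in the collapse, and every unmapped code forces a judgment call—two reasons states reported that mapping affected the quality of the data they submit.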
The data transfer problem can be stark: for example, in 1 state, 65 percent of the records reviewed by HHS during an AFCARS assessment recorded the date the children were removed from their homes as July 28, 1997—the date the SACWIS came on-line; however, the actual dates of removal for these children ranged from 1988 to 1997.

HHS provides technical assistance for AFCARS and NCANDS reporting through a number of resources. HHS officials in the central office and NCANDS contractor staff serve as the points of contact for states to ask questions and seek guidance on reporting child welfare data. HHS officials reported that assistance is offered in a number of ways, including telephone and e-mail communication. Officials in 3 of the 5 states that we visited said that the one-on-one focused technical assistance was useful when provided in a timely fashion. Most state officials found the NCANDS data easier to report, in part because more people were available for consultation and those staff were more accessible and responsive. For example, states have access to four NCANDS specialists and staff in the contractor's central office when they need assistance reporting child abuse and neglect information. However, some of the states we visited reported that only one or two staff in HHS's central office are available to assist with AFCARS reporting. In addition, the Resource Center offers states assistance with improving data quality; however, Resource Center staff reported that the assistance is geared more toward improving the limited data used in the CFSR—the federal review process that monitors states' compliance with child welfare laws and federal outcome measures—than toward all the data reported to HHS. The Resource Center also sponsors an annual information technology conference during which sessions covering all data-related issues are held, including practices for ensuring data quality and outcome evaluation in child welfare. In conjunction with the national data conference, HHS officials and the contractors that operate NCANDS hold an annual technical assistance meeting for states to share ideas with one another, discuss data elements that pose difficulties, and explore ways to address these problems. For example, at a recent technical assistance meeting, approximately 43 state representatives attended sessions on preparing the calendar year 2002 NCANDS data submissions and received a detailed explanation of how the NCANDS staff test states' data submissions for quality. In addition, an NCANDS state advisory group meets annually to talk with HHS officials about NCANDS data and their experiences reporting data. From these meetings, the state advisory group proposes changes or improvements to NCANDS. HHS and state officials reported that this partnership has helped ease some of the challenges in reporting child abuse and neglect data.

In addition to the direct assistance through consultation with HHS officials and the Resource Center, HHS has made available to states the software it uses to examine states' AFCARS and NCANDS submissions for inconsistencies and invalid data. Officials in all the states we visited said that they regularly use this software, and an HHS official said that nearly every state has used the software at least once. When the data are submitted to HHS, they are run through the same software, and HHS notifies the states of areas where data are missing or inconsistent and allows the states to resubmit the data after errors are corrected.
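We did not examine the software itself, but the kind of consistency edits such utilities apply can be sketched briefly; the element names and rules in the Python fragment below are hypothetical.

    from datetime import date

    REQUIRED = ("child_id", "dob", "removal_date", "placement_type")

    def edit_checks(record):
        """Apply simple edits--missing required elements and internally
        inconsistent dates--and return a list of error messages."""
        errors = [f"missing required element: {field}"
                  for field in REQUIRED if not record.get(field)]
        dob = record.get("dob")
        removal = record.get("removal_date")
        discharge = record.get("discharge_date")
        if dob and removal and removal < dob:
            errors.append("removal date precedes date of birth")
        if removal and discharge and discharge < removal:
            errors.append("discharge date precedes removal date")
        return errors

    record = {"child_id": "A100", "dob": date(1995, 4, 2),
              "removal_date": date(1994, 1, 1), "placement_type": None}
    print(edit_checks(record))
    # ['missing required element: placement_type',
    #  'removal date precedes date of birth']

A state that runs such edits before each submission can correct errors in advance rather than waiting for HHS to return them.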
For example, HHS officials said that they worked with one state that was trying to determine the source of data errors in reporting to AFCARS the race or ethnicity of children in its care. The state was not able to determine the source of the problem, so an HHS official examined the state's submissions and helped correct the data errors. The officials reported that these tests help them identify some data quality errors, such as missing data, and that they believe data have, in general, improved in recent years. However, they indicated that the tests cannot pinpoint the underlying problems contributing to these errors. Furthermore, one official reported that no specific efforts have been made to track the individual data elements and, therefore, HHS cannot report on how data quality has changed over time. The results of these quality tests had been the basis for penalties levied against states that submitted low-quality AFCARS data before the penalties were rescinded. HHS officials reported that the penalties served as an effective motivation for states to correct their data. Although HHS was not able to report how the lack of penalties might be affecting recent data quality, an official reported that the agency plans to conduct this analysis in the future.

In an attempt to help states comply with the reporting standards and address some of the factors that contribute to data quality problems, HHS performs comprehensive reviews of state information systems' ability to capture AFCARS data to identify problems associated with data collection and reporting, and to ensure that the information in the automated system correctly reflects children's experiences in care. The assessments include a technical review of the states' computer code, a comparison of the data from selected cases available in the information system to the case files, and an improvement plan to resolve any errors. In addition, HHS officials offer guidance to the states on improvements that can be made to the information system and changes to the program code used to report the AFCARS data. HHS conducted pilot reviews in 8 states between 1996 and 2000. By March 2003, HHS had conducted 8 official reviews—even though states began reporting to AFCARS in 1995.

According to results from 6 of the 8 official AFCARS assessments we reviewed, no state met the reporting requirements for all AFCARS data elements. Table 8 shows a selection of the data elements and the states' ratings. The problems noted in the reviews are similar to those we heard from states responding to our survey and those we visited. For example, most states received ratings of 2 or 3, indicating technical and/or data entry errors that affect the AFCARS data quality. State officials in these 6 states reported that they found the reviews useful for improving their AFCARS data submissions. In particular, they valued the thorough review by HHS officials of the computer code states use to report the data. Some of these officials reported that if all states were reviewed, the quality of data available in AFCARS would improve tremendously. However, HHS officials reported that they are not mandated to conduct the AFCARS reviews and that priority is placed on other reviews, such as the CFSR and SACWIS reviews. In addition, officials explained that the AFCARS reviews are not conducted in states developing SACWIS until the systems are operational.
HHS expects to complete approximately four reviews each year, depending on available resources, and has scheduled states through 2006. Similar to the SACWIS reviews, HHS officials offer recommendations and technical assistance to states during the review on how they can improve the quality of the data reported to AFCARS.

Although the states we visited appreciated some of HHS's efforts to assist with improving state data quality, they and most states responding to our survey agreed that the assistance is not always consistent or easily accessible (see fig. 2). States reported similar information to the Inspector General—AFCARS data elements were not clearly and consistently defined, and technical assistance was effective but difficult to access. The primary concerns reported by the states we visited were delays in receiving clear, written guidance on defining and reporting certain data elements and the lack of state input in suggesting changes to AFCARS. Despite the written guidance available to states in the form of regulations and an on-line policy manual, states reported that the variation in state policies and practices makes it difficult to interpret how to apply the general guidance. As a result, states consult with HHS to ensure they are applying the regulations appropriately. However, in commenting on a draft of this report, officials in Oklahoma told us that a common concern among the states is the lack of timely response from HHS when seeking guidance on how to report data. In addition, officials in New York explained they have made it a practice to check the HHS Web site on a regular basis for current guidance, but have not found it a useful tool, and may turn to other states for guidance on AFCARS reporting. In commenting on a draft of this report, HHS explained that it first refers states to its Web site for information and believes that the available guidance addresses states' concerns in most instances.

In addition, the states that have had an AFCARS review experienced delays in obtaining guidance on how to proceed following the on-site review. Although they found the review to be very helpful, some states reported that HHS officials are delayed in responding to their questions. For example, Texas officials reported that the state sought clarification on its improvement plan and submitted additional questions to HHS following the review; however, when we spoke with the state officials, they said that they had been waiting 3 months for a response on how to proceed. An HHS official told us that since the review process is relatively new, the agency is still developing a process to respond to the states and recognizes that it has not been responsive to the states already reviewed. In addition, HHS is taking steps to gather feedback from states and other users of AFCARS data to determine how to improve the system to make the data more accurate and useable. As a part of these efforts, HHS has published a Federal Register notice soliciting comments and held focus group meetings at national conferences.

The difficulties states face in receiving federal guidance and assistance, as well as the other challenges they face in reporting data, may negatively affect the reliability of the data available in AFCARS and NCANDS. As a result, states are concerned that the national standards used in the CFSR are based on unreliable data and should not be used as a basis of comparison and potential financial penalty.
The variation in states’ reporting practices may affect the validity of the measures and may place some states at a disadvantage. For example, the CWLA and Inspector General studies found that approximately half the states include the juvenile justice population in their AFCARS reports, while the other states do not. Child welfare experts and some state officials believe that the states that include children served by the juvenile justice agency in their AFCARS report may report a higher number of re-entries into the child welfare system or a higher number of moves within the system when compared to states that do not have IV-E agreements with their juvenile justice systems. As a result, a state that includes such children in their AFCARS report are likely to fare less favorably when compared to the national standard than other states on two outcome measures—foster care re-entries and stability of foster care placements—and may face financial penalties associated with the CFSR. Some states are using a variety of practices to address the challenges associated with developing SACWIS and improving data reliability, although no formal evaluations are available on their effectiveness. To address the challenge of developing a system to meet statewide needs, states relied on caseworkers and supervisors from local offices to assist in the design and testing of the system. Few states reported in our survey strategies to overcome the other key challenges, such as limited funding and securing knowledgeable contractors, but some states we visited have devised some useful approaches. For example, Oklahoma child welfare officials—in order to maximize the limited state funding for maintaining their SACWIS—reported saving $1 million each year by hiring some of the contractors who developed their SACWIS as permanent staff. To improve data reliability, the 5 states we visited routinely review their data to identify data entry errors so that managers can ensure that the missing data are entered appropriately. In addition, some states reported that frequent use of the data, such as publishing periodic management reports detailing local offices’ performance on outcome measures, helps caseworkers understand the importance of entering timely information. To overcome development challenges, survey respondents emphasized the importance of including system users in the various phases of completing SACWIS—planning, design, development, testing, and implementation. Past GAO work and other research efforts have determined similar approaches as best practices in building information systems. Forty-four of the 46 states responding to our survey that they are developing or operating a SACWIS indicated that they relied on internal users, such as caseworkers and supervisors, in the development of their systems and 34 of these states said that they were extremely helpful participants. The extent to which the users were involved in development differed across the states. For example, in Texas, caseworkers from all of their child welfare regions were recruited to provide input on design and development, as well as during initial testing, pilot testing, and implementation of the system. Arkansas reported establishing a committee made up of users to review the work plan and sign off on recommended changes. 
In addition, states reported that their system users served a number of purposes, including serving as experts on the different specialties within child welfare, such as child abuse, foster care, or adoption, and as representatives from local or county offices to assist in identifying the diverse approaches to capturing information across the state. For example, Indiana reported that caseworkers involved in SACWIS development represented the unique needs of the different geographical areas of the state and helped design a uniform statewide system to meet the diverse needs of large, intermediate, and small local offices. Ten states noted that user input should not be limited to frontline workers, such as caseworkers, but should include representatives from other areas of the agency, such as the financial staff, and other agencies that serve children, such as child support enforcement.

Since many SACWIS link with other state information systems, states advised that developing a collaborative relationship with other state agencies would help the development of the system. Although it was not one of the most common challenges reported in our survey, New Hampshire reported that one of its challenges in meeting its SACWIS time frame was not working collaboratively with other agencies, such as Temporary Assistance for Needy Families (TANF) and child support enforcement, to develop the payment component of SACWIS. Similarly, we previously reported that the difficulty of developing linkages between social services agencies limits the effectiveness of all the programs that serve families. In an attempt to overcome this challenge, 26 of the 46 states that reported developing or operating a SACWIS indicated that they included external public agency users, and 23 reported using representatives from other state agencies that serve children in developing their SACWIS. Indiana said that a task force made up of representatives from the TANF and child support enforcement agencies was developed to design the linkages between the systems. In addition, Colorado officials reported that they are working with the Department of Youth Corrections—an agency that shares the SACWIS with child welfare—to ensure that the shared screens use the same definitions.

In addition to seeking input from caseworkers and other system users while developing SACWIS, many states continue to include users as a part of the implementation teams, to serve as contacts in the field and provide ongoing assistance, and to provide input on system enhancements. Alabama responded in our survey that the state had "mentors" in each county to help caseworkers adjust to the new system. These mentors continue to provide ongoing support now that the system is implemented. Similarly, Oklahoma developed Field Implementation Teams consisting of one contractor and one child welfare staff person. During system implementation, the teams went to field offices to provide on-site assistance with using SACWIS and becoming accustomed to the new method of recording child welfare information. Furthermore, Oklahoma recruits experienced child welfare field staff for its SACWIS help desk because of their knowledge of the system and child welfare policy and practice.

Although states faced other challenges in completing their SACWIS, few reported implementing approaches to overcome the barriers. According to survey results, a common problem states faced in developing SACWIS was receiving insufficient state funding for development.
However, in our previous work on managing information technology, we found that IT products can become obsolete in a matter of months rather than years, calling for more frequent investments in upgrades and enhancements. In addition, officials in Iowa told us that maintaining systems takes just as much money as building them. States did not report in our survey approaches for obtaining more funding for developing SACWIS, and few states reported developing strategies in an attempt to overcome the challenges associated with tight budgets for maintaining their systems. For example, Iowa officials engaged in careful planning with system users to ensure that they addressed the highest priorities when enhancing the system. In particular, the officials reported that maintaining tight control over the development and maintenance processes helps them avoid investing inordinate amounts of resources to make corrections to the system. In Oklahoma, child welfare officials reported that they relied on the contractors who developed their SACWIS to conduct ongoing maintenance activities until the contract expired in 2001. At that time, the agency hired some of the contract staff as full-time state employees to continue with the maintenance activities. State officials explained that this approach ensured continuity of service, in addition to saving the agency approximately $1 million each year. Similarly, few states reported on approaches to overcome the challenge of finding contractors with knowledge of child welfare practice. However, Iowa officials explained that once contract staff are hired, they are required to attend the same training as new caseworkers so that they become familiar with the state's child welfare policies and casework practices.

Twenty-eight states reported using approaches to help caseworkers identify the data elements that are required for federal reporting and to help them better understand the importance of entering timely and accurate data. Ten states responding to our survey reported reviewing the federal reporting requirements in training sessions as a promising approach they use to improve data quality or as a lesson learned. For example, Tennessee reported that the state added a component about AFCARS to the initial and ongoing training workers receive about using SACWIS. The curriculum addresses the AFCARS report in general and the individual data elements to help the caseworkers better understand the purpose of collecting the information. In Nebraska, a "desk aid" that explains the data elements and where and why to enter them in the system is available on the caseworkers' computer desktops. In addition, New York has developed a step-by-step guide explaining to workers how NCANDS data should be entered, with references to the policy or statute requiring the information.

To improve data reliability, some states have designed their information systems with special features to encourage caseworkers to enter the information. Four states responding to our survey and 3 states we visited designed their SACWIS with color-coded fields to draw attention to the data elements that caseworkers are required to enter. For example, the AFCARS data fields in Oklahoma's system are coded red until the data are entered, after which the fields change to blue. In addition, workers can look at a single screen in the Oklahoma system to see what AFCARS data elements need to be completed without having to scroll through the entire case record.
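The red-until-entered convention amounts to a simple completeness check. The Python sketch below—with hypothetical element names—returns a display color for each required element based on whether it has been completed; Oklahoma's actual implementation may differ.

    def field_colors(case_record, required_elements):
        """Return a display color for each required element: red if the
        element is still empty, blue once it has been completed."""
        return {
            element: "blue" if case_record.get(element) else "red"
            for element in required_elements
        }

    case = {"dob": "1995-04-02", "race": None, "removal_reason": "neglect"}
    print(field_colors(case, ["dob", "race", "removal_reason"]))
    # {'dob': 'blue', 'race': 'red', 'removal_reason': 'blue'}

A single summary screen like Oklahoma's can then be generated by listing only the elements whose color is red.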
Colorado, Iowa, New York, and Oklahoma have built into their systems alerts—also known as "ticklers"—to remind caseworkers and supervisors of tasks that they need to complete. For example, in Iowa, alerts are sent to supervisors if a caseworker fails to enter the data necessary to complete a payment to a foster care provider. In Oklahoma, a stoplight icon on the caseworker's computer desktop reminds the worker when tasks are due. A green light indicates that nothing is due within 5 days; a yellow light means that something is due within 5 days; and a red light means that something is overdue. Caseworkers and supervisors in the states we visited had mixed responses about the usefulness and effectiveness of the alerts. Some caseworkers found them to be a nuisance, while other caseworkers and supervisors found them to be useful tools in managing workloads and prioritizing daily tasks.

Six states reported that the best way to improve data quality was to use the data in published reports and hold the caseworkers and supervisors accountable for the outcomes of the children in their care. In addition, 6 states responding to our survey reported using the data available in their information systems to measure state outcomes similar to the CFSR. State officials reported that this approach is an effective way to get local offices invested in the quality of the data. For example, North Carolina publishes monthly reports for each county comparing its performance on state data indicators, such as the length of time children spend in care, to counties of similar size and the state as a whole. County officials reported that these reports encourage workers to improve the quality of the data collected and entered into the state system since their performance is being widely published and compared to other counties. In addition, all the states we visited reported that frequent review of their data, such as using software from HHS to test their AFCARS and NCANDS data to pinpoint data entry errors prior to submitting them to HHS, has helped improve data quality. When the states identify poor data, they alert the caseworkers and supervisors of needed corrections and data entry improvements. For example, Colorado runs these reports about 4 to 5 times a year, with one run occurring approximately 6 weeks before each AFCARS submission. When the data specialists find errors, they notify the caseworker to clean up the data. New York officials told us that they incorporate the results from these tests in training if a consistent pattern of errors is identified.

While most states are developing statewide information systems, challenges with data reliability remain. Although SACWIS development is delayed in many states, state officials recognize the benefits of having a uniform system that enhances the states' ability to monitor the services provided and the outcomes for children in their care. Although states began reporting to NCANDS in 1990 and were mandated to begin reporting to AFCARS in 1995, most states continue to face challenges providing complete, accurate, and consistent data to HHS. In addition, the results of more recent HHS efforts, such as conducting AFCARS-related focus groups, are unknown. Reliable data are essential to the federal government's development of policies that address the needs of the children served by state child welfare agencies and its ability to assist states in improving child welfare system deficiencies.
Without well-documented, clearer guidance and the completion of more comprehensive reviews of states' AFCARS reporting capabilities, states are limited in overcoming challenges that affect data reliability. Because these challenges still remain, HHS may be using some questionable data as the foundation for national reports and national standards for the CFSR and may not have a clear picture of how states meet the needs of children in their care.

To improve the reliability of state-reported child welfare data, we are recommending that the Secretary of HHS consider, in addition to HHS's recent efforts to improve AFCARS data, ways to enhance the guidance and assistance offered to states to help them overcome the key challenges in collecting and reporting child welfare data. These efforts could include a stronger emphasis placed on conducting AFCARS reviews and more timely follow-up to help states implement their improvement plans or identifying a useful method to provide clear and consistent guidance on AFCARS and NCANDS reporting.

We obtained comments on a draft of this report from the Department of Health and Human Services' Administration for Children and Families (ACF). These comments are reproduced in appendix III. ACF also provided technical clarifications, which we incorporated when appropriate. ACF generally agreed with our findings and commented that the report provides a useful perspective of the problems states face in collecting data and of ACF's effort to provide ongoing technical assistance to improve the quality of child welfare data.

In response to our recommendation, ACF said that we categorized its efforts as "recent" and did not recognize the long-term efforts to provide AFCARS- and NCANDS-related guidance to the states. Although we did not discuss each effort in depth, we do mention the agency's ongoing efforts in our report. However, we refer to the recent efforts in the recommendation in recognition of the agency's current activities to formally obtain, document, and incorporate feedback from the states with regard to collecting and reporting adoption and foster care data. ACF also noted in its comments that the data definitions need to be updated and revised and said it is currently in the process of revising the AFCARS regulations to further standardize the information states are to report—which we acknowledge in our report. In addition to the steps HHS is taking to further improve the AFCARS data, our recommendation encourages HHS to consider ways to enhance the ongoing guidance and assistance offered to states to help them overcome the key challenges in collecting and reporting child welfare data. ACF requested specific recommendations on approaches to overcome the difficulty of collecting and merging information from multiple state and county programs into a single national database. While there may be additional methodologies that the agency could use to overcome such challenges, our recommendation focuses on improving the guidance already offered to the states as a step to helping them better comply with the reporting requirements. In addition, ACF added that although staff turnover in state child welfare agencies is a significant contributor to data quality issues, we did not focus on this as a significant factor. ACF also commented that it is firmly committed to continue to support the states and to provide technical assistance and other guidance as its resources permit.
However, because we recently issued a detailed report on a variety of caseworker issues, we primarily focused in this report on the key data entry challenges caseworkers face and refer readers to our previous work for additional information on challenges related to caseworker recruitment and retention and their effect on child welfare agencies. In commenting on our previous work prior to its release, HHS indicated that it does not have the authority to require states to address factors that contribute to staff turnover, such as high caseloads, and said that it has limited resources to assist the states in the area of staff recruitment and retention. ACF commented that it provided increased funding to the National Resource Centers in fiscal year 2003, which it believes will improve its ability to provide assistance to the states.

After receiving the draft report for comment, HHS separately provided information on an additional service the National Resource Center for Information Technology in Child Welfare provides to states. In an effort to assist states with improving the quality of their AFCARS data, the Resource Center will review states' programming code used for AFCARS data. As of June 2003, HHS reported that the Resource Center had provided this assistance to Arkansas, Louisiana, Mississippi, North Carolina, Nevada, New Jersey, and Rhode Island, and that 3 states—Maryland, Michigan, and Wisconsin—and the District of Columbia had requested the assistance.

In commenting on our survey methodology, ACF requested that we explain why the territory of Puerto Rico was not included in the state survey. Although Puerto Rico receives federal child welfare funds, we traditionally focus on the states and therefore do not include the U.S. territories, including American Samoa, the Commonwealth of the Northern Mariana Islands, Guam, Puerto Rico, and the Virgin Islands, in the scope of our reviews.

Finally, in response to our discussion of the AFCARS review process, ACF provided a few clarifications. During the course of our review, an HHS official characterized the AFCARS review process as relatively new and explained that the agency is still developing a process to respond to the states following the completion of the on-site review. When responding to a draft of this report, ACF disagreed with this characterization. ACF commented that the review process has been in place since 1996, pointing to the pilot reviews as evidence that the agency has a defined process. However, when we requested AFCARS reports for review, HHS explained that the states undergoing pilot reviews would be re-reviewed and that the official process was formalized in 2001 with the release of an AFCARS review guide and the start of the official reviews. In addition, ACF commented that SACWIS reviews do not take priority over AFCARS reviews. However, officials had previously explained that although SACWIS and AFCARS reviews can happen at the same time, in practice, the AFCARS reviews are scheduled to occur in the states that are developing SACWIS after they have participated in a SACWIS review. Furthermore, ACF explained that states do not develop their improvement plan following the conclusion of the AFCARS review. Instead, ACF officials draft the plan for the state. Although state representatives had described a challenge in receiving timely feedback on their improvement plan, we have changed the language in the report to reflect ACF's comment.
We also provided a copy of our draft to child welfare officials in the 5 states we visited—Colorado, Iowa, North Carolina, New York, and Oklahoma. Iowa and New York had technical clarifications, which we incorporated when appropriate. Oklahoma provided additional information, which we also incorporated. Colorado had no suggested corrections or edits, and North Carolina did not provide any comments. As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its issue date. At that time, we will send copies of this report to the Secretary of Health and Human Services, state child welfare directors, and other interested parties. We will make copies available to others on request. In addition, the report will be available at no charge on GAO's Web site at http://www.gao.gov. If you or your staff have any questions or wish to discuss this material further, please call me at (202) 512-8403 or Diana Pietrowiak at (202) 512-6239. Key contributors to this report are listed in appendix IV. To determine the progress states have made in developing Statewide Automated Child Welfare Information Systems (SACWIS), we surveyed all 50 states and the District of Columbia through a Web-based survey. We pretested the survey instrument in Maryland and the District of Columbia. We received responses from 49 states and the District of Columbia; Nevada did not respond to the survey. We discarded a question that asked states to report the date their advance planning document (APD) was approved by the Department of Health and Human Services (HHS) because, due to a technical error, the date was truncated and a valid answer was not stored in the responses. Of the 50 survey responses, 46 were from states that were pursuing SACWIS development. The 4 states not developing SACWIS were asked to skip the sections of the survey that asked about SACWIS development, system modifications, and supported services and links. We did not independently verify the survey responses. In addition, we visited 5 states to obtain more detailed and qualitative information regarding states' experiences developing SACWIS. We conducted site visits in Colorado, Iowa, New York, North Carolina, and Oklahoma, selecting these states to represent a range of SACWIS development stages, sizes of foster care populations, and geographic locations. During our site visits, we interviewed state and local child welfare staff, state and local staff who regularly exchange information with the child welfare agency, and private contractors. We also spoke with HHS staff in the central and regional offices, National Resource Center officials, contractors involved in SACWIS development, and child welfare experts from the Child Welfare League of America and the American Public Human Services Association. To determine how states and HHS ensure that reliable data exist on children served by child welfare agencies, we surveyed states using the above-mentioned survey instrument. In addition, we interviewed state and HHS officials on their efforts to analyze and compile data and on HHS's role in providing technical assistance to states. We spoke with state officials during our site visits and with HHS officials in the central and regional offices, and we attended the 6th National Child Welfare Data Conference. We obtained and reviewed available SACWIS and Adoption and Foster Care Analysis and Reporting System (AFCARS) reports. At the time of our review, HHS had conducted 26 SACWIS reviews.
We obtained and reviewed 23 reports; the remaining reports were not available for review because HHS had not yet completed them or shared the results with the states. Most of the SACWIS reports were considered drafts because many states were still resolving issues with completing their systems. We reviewed AFCARS assessment reports from 6 of the 8 states assessed by HHS—Arkansas, Connecticut, New Mexico, Texas, Vermont, and Wyoming; HHS conducted its reviews in Delaware and West Virginia after we completed our analysis. We did not review any of the eight pilot review reports because these were not final reports and HHS plans to conduct official reviews in these states. We analyzed the AFCARS assessment reports to understand the breadth of on-site assistance HHS provides to states during the review and to identify common data collection and reporting difficulties among states. Finally, we talked with officials in 6 of the 8 states that had undergone an AFCARS review about their experiences during the review, and we also spoke with child welfare experts. To identify practices state and local child welfare agencies are using to help ensure the accuracy, timeliness, and completeness of child welfare data, we interviewed state and local child welfare officials during our site visits and inquired about the practices they are employing. We also included questions on practices and lessons learned in our survey. In addition, we spoke with numerous child welfare experts, including individuals from the National Resource Center for Information Technology in Child Welfare, the Child Welfare League of America, and the American Public Human Services Association. In addition to those named above, Leah DeWolf and Rachel Seid made key contributions to this report. Avrum Ashery, Patrick DiBattista, Barbara Johnson, Valerie Melvin, and Rebecca Shea also provided key technical assistance. The American Public Welfare Association. Statewide Automated Child Welfare Information Systems: Survey of State Progress. Washington, D.C.: July 1997. The American Public Welfare Association. Child Welfare Information Systems: Some Concepts and Their Implications. Washington, D.C.: July 1994. The American Public Welfare Association. Survey of State Child Welfare Information Systems: Status of AFCARS and SACWIS. Washington, D.C.: April 1995. Caliber Associates. Analysis of State Child Welfare Data: VCIS Survey Data from 1990 through 1994. Department of Health and Human Services, May 1998. Center for Technology in Government, University of Albany, SUNY. Tying a Sensible Knot: A Practical Guide to State-Local Information Systems. Albany, N.Y.: June 1997. Child Welfare League of America. National Working Group Highlights, "Child Maltreatment in Foster Care: Understanding the Data." Washington, D.C.: October 2002. Child Welfare League of America. National Working Group Highlights, "Placement Stability Measure and Diverse Out-of-Home Care Populations." Washington, D.C.: April 2002. U.S. Department of Health and Human Services, Administration for Children and Families, Administration on Children, Youth and Families, Children's Bureau. Child Maltreatment 2001. Washington, D.C.: 2003. U.S. Department of Health and Human Services, Administration for Children and Families, Administration on Children, Youth and Families, Children's Bureau. Child Welfare Outcomes 1999: Annual Report. Washington, D.C.: n.d. U.S. Department of Health and Human Services, Office of Inspector General.
Adoption and Foster Care Analysis and Reporting System (AFCARS): Challenges and Limitations. Washington, D.C.: March 2003. Child Welfare and Juvenile Justice: Federal Agencies Could Play a Stronger Role in Helping States Reduce the Number of Children Placed Solely to Obtain Mental Health Services. GAO-03-397. Washington, D.C.: April 21, 2003. Child Welfare: HHS Could Play a Greater Role in Helping Child Welfare Agencies Recruit and Retain Staff. GAO-03-357. Washington, D.C.: March 31, 2003. Human Services: Federal Approval and Funding Processes for States' Information Systems. GAO-02-347T. Washington, D.C.: July 9, 2002. Foster Care: Recent Legislation Helps States Focus on Finding Permanent Homes for Children, but Long-Standing Barriers Remain. GAO-02-585. Washington, D.C.: June 28, 2002. Human Services Integration: Results of a GAO Cosponsored Conference on Modernizing Information Systems. GAO-02-121. Washington, D.C.: January 31, 2002. District of Columbia Child Welfare: Long-Term Challenges to Ensuring Children's Well-Being. GAO-01-191. Washington, D.C.: December 29, 2000. Child Welfare: New Financing and Service Strategies Hold Promise, but Effects Unknown. GAO/T-HEHS-00-158. Washington, D.C.: July 20, 2000. Welfare Reform: Improving State Automated Systems Requires Coordinated Federal Effort. GAO/HEHS-00-48. Washington, D.C.: April 27, 2000. Foster Care: States' Early Experiences Implementing the Adoption and Safe Families Act. GAO/HEHS-00-1. Washington, D.C.: December 22, 1999. Foster Care: HHS Could Better Facilitate the Interjurisdictional Adoption Process. GAO/HEHS-00-12. Washington, D.C.: November 19, 1999. Foster Care: Effectiveness of Independent Living Services Unknown. GAO/HEHS-00-13. Washington, D.C.: November 5, 1999. Foster Care: Kinship Care Quality and Permanency Issues. GAO/HEHS-99-32. Washington, D.C.: May 6, 1999. Juvenile Courts: Reforms Aim to Better Serve Maltreated Children. GAO/HEHS-99-13. Washington, D.C.: January 11, 1999. Child Welfare: Early Experiences Implementing a Managed Care Approach. GAO/HEHS-99-8. Washington, D.C.: October 21, 1998. Foster Care: Agencies Face Challenges Securing Stable Homes for Children of Substance Abusers. GAO/HEHS-98-182. Washington, D.C.: September 30, 1998. Managing Technology: Best Practices Can Improve Performance and Produce Results. GAO/T-AIMD-97-38. Washington, D.C.: January 31, 1997. Child Welfare: HHS Begins to Assume Leadership to Implement National and State Systems. GAO/AIMD-94-37. Washington, D.C.: June 8, 1994. Executive Guide: Improving Mission Performance Through Strategic Information Management and Technology. GAO/AIMD-94-115. Washington, D.C.: May 1, 1994.
To better monitor children and families served by state child welfare agencies, Congress authorized matching funds for the development of statewide automated child welfare information systems (SACWIS) and required that the Department of Health and Human Services (HHS) compile information on the children served by state agencies. This report reviews (1) states' experiences in developing child welfare information systems and HHS's role in assisting in their development, (2) factors that affect the reliability of data that states collect and report on children served by their child welfare agencies and HHS's role in ensuring the reliability of those data, and (3) practices that child welfare agencies use to overcome challenges associated with SACWIS development and data reliability. HHS reported that 47 states are developing or operating a SACWIS, but many continue to face challenges in developing their systems. Most state officials said they recognize the benefits their state will achieve by developing a SACWIS, such as improving the timeliness of child abuse and neglect investigations; however, despite the availability of federal funds since 1994, states reported a median delay of 2-1/2 years beyond the time frames they set for completion. States reported that they encountered difficulties during SACWIS development, such as challenges in securing state funding and in creating a system that reflected their work processes. In response to some of these challenges, HHS has provided technical assistance to help states develop their systems and has conducted on-site reviews of SACWIS to verify that the systems meet federal requirements. Despite efforts to implement comprehensive information systems, several factors affect states' ability to collect and report reliable adoption, foster care, and child abuse and neglect data. States responding to GAO's survey and officials in the 5 states GAO visited reported that insufficient caseworker training and inaccurate and incomplete data entry affect the quality of the data reported to HHS. In addition, states reported technical challenges in reporting data. Despite HHS's assistance, many states report ongoing challenges, such as the lack of clear and documented guidance on how to report child welfare data. In addition, although states were mandated to begin reporting data to the Adoption and Foster Care Analysis and Reporting System (AFCARS) in 1995, few reviews of states' AFCARS reporting capabilities have been conducted to assist states in resolving some of their reporting challenges. Some states are using a variety of practices to address the challenges associated with developing SACWIS and improving data reliability. For example, 44 states included caseworkers and other system users in the design and testing of SACWIS, and 28 states reported using approaches to help caseworkers identify and better understand the data elements that are required for federal reporting.
Long-term care includes many types of services needed when a person has a functional disability, whether physical or cognitive. Individuals needing long-term care have varying degrees of difficulty in performing some activities of daily living without assistance, such as bathing, dressing, eating, toileting, and moving from one location to another. They may also have trouble with instrumental activities of daily living, which include such tasks as preparing food, housekeeping, and handling finances. They may have a mental impairment, such as Alzheimer's disease, that necessitates supervision to avoid harming themselves or others, or they may need assistance with tasks such as taking medications. Although a chronic physical or mental disability may occur at any age, the older an individual becomes, the more likely a disability will develop or worsen. Assistance for such needs takes many forms and is provided in varied settings, including institutional care in nursing homes or alternative community-based residential settings such as assisted living facilities, in-home care services, and unpaid care from family members or other informal caregivers. Approximately 64 percent of all elderly individuals with a disability relied exclusively on unpaid care from family or other informal caregivers; even among the almost totally dependent elderly—those with difficulty performing five activities of daily living—about 41 percent relied entirely on unpaid care. Long-term care is financed through a variety of sources, primarily public programs. Nationally, spending from all public and private sources for long-term care for all ages totaled about $137 billion in 2000, accounting for nearly 11 percent of all health care expenditures. Medicaid, the joint federal-state health-financing program for low-income individuals, continues to be the largest funding source for long-term care. In 2000, Medicaid paid 46 percent (about $63 billion) of total long-term care expenditures. Individuals' out-of-pocket payments represented the second largest source of payments for long-term care—a larger share of long-term care spending than for other types of health care services, such as physician and hospital care. These out-of-pocket payments accounted for 23 percent (about $31 billion) of total long-term care expenditures in 2000. Medicare, private insurance, and other public or private sources financed the remaining shares of these expenditures. States share responsibility with the federal government for Medicaid, paying on average approximately 43 percent of total Medicaid costs. Within broad federal guidelines, states have considerable flexibility in determining who is eligible and what services to cover in their Medicaid programs. Among long-term care services, states are required to cover nursing facility and home health services for Medicaid beneficiaries. States also may choose to cover additional services that are not mandatory under federal standards, such as personal care services, private-duty nursing care, and rehabilitative services. For services that a state chooses to cover under its CMS-approved state Medicaid plan, enrollment for those eligible cannot be limited, but benefits may be. For example, states can limit the personal care service benefit through medical necessity requirements and utilization controls.
States may also cover Medicaid home and community-based services (HCBS) through waivers of certain statutory requirements under section 1915(c) of the Social Security Act, thereby receiving greater flexibility in the provision of long-term care services. These waivers permit states to adopt a variety of strategies to control the cost and use of services. For example, states may obtain CMS approval to waive certain provisions of the Medicaid statute, such as comparability, which generally requires states to make all services available to all eligible individuals statewide. With a waiver, states can target services to individuals on the basis of certain criteria, such as disease, age, or geographic location. Further, states may limit the number of persons served to a specified target, requiring additional persons meeting eligibility and need criteria to be put on a waiting list. Limits may also be placed on the costs of services that will be covered by Medicaid. To obtain CMS approval for an HCBS waiver, states must demonstrate that the cost of the services to be provided under the waiver (plus other state Medicaid services) is no more than the cost of institutional care (plus any other Medicaid services provided to institutionalized individuals). These waivers permit states to cover a wide variety of nonmedical and social services and supports that allow people to remain at home or in the community, including personal care, personal emergency response systems, homemaker assistance, chore assistance, adult day care, and other services. Medicare—the federal health financing program covering nearly 40 million Americans who are aged 65 or older, are disabled, or have end-stage renal disease—primarily covers acute care, but it also pays for limited post-acute stays in skilled nursing facilities and home health care. Medicare spending accounted for 14 percent (about $19 billion) of total long-term care expenditures in 2000. During the early and mid-1990s, Medicare became an increasingly significant funding source for individuals receiving continuing home health care, including home health aide services that may at times substitute for other long-term care services. The adoption of an interim payment system in 1997 to better control spending growth was followed by a sharp reduction in the number of home health visits and in spending covered by Medicare. A new home health prospective payment system, implemented in October 2000, was intended to more closely align Medicare payments with patient needs. While it provides funding that allows a higher number of home health visits per user than under the interim payment system, it also provides incentives to reward efficiency and control the use of services. The number of home health visits declined from about 29 visits per episode immediately prior to the implementation of the prospective payment system to 22 visits per episode during the first half of 2001. Most of the decline was in home health aide visits. Each of the states we reviewed—Kansas, Louisiana, New York, and Oregon—covered home and community-based services in their Medicaid programs but differed in how much of their Medicaid spending for long-term care for the elderly they dedicated to home and community-based care and in how they designed their programs for these services.
In general, Kansas and Louisiana spent a smaller portion of their Medicaid long-term care expenditures on home and community-based services than the other two states, and many of these services were not available to new clients at the time of our review because both states had waiting lists. New York had the highest Medicaid spending on long-term care services for the elderly, with per capita spending nearly two-and-a-half times the national average. In addition, most of New York's home and community-based services were covered through its state Medicaid plan, making the services available to all eligible Medicaid beneficiaries. Oregon spent much less on nursing home care than the other states, with a higher share of its long-term care expenditures for the elderly dedicated to home and community-based care. The four states we reviewed allocated different proportions of Medicaid long-term care expenditures for the elderly to federally required long-term care services, such as nursing facilities, and to state optional home and community-based care, such as in-home personal support, adult day care, and other home and community services. (See table 1.) New York's expenditures for Medicaid long-term care services (including nursing facilities, home health, personal support, and other care) for the elderly were $2,463 per person aged 65 or older in 1999—much higher than the national average of $996. While nursing home care represented 68 percent of New York's expenditures, New York also spent more than the national average on long-term care services provided at the state's option, such as personal support services. Kansas and Louisiana spent near the national average of $996 per person aged 65 or older ($935 and $1,012, respectively), but nursing home care accounted for a higher portion of these expenditures in Louisiana (93 percent) than the national average (81 percent). Oregon spent $604 on Medicaid long-term care services per elderly individual. In contrast to the other states, Oregon spent much less per capita on nursing home care and spent a larger portion on other long-term care services, such as care in alternative residential settings. The states also differed in how they designed their home and community-based services, influencing the extent to which these services were available to elderly individuals with disabilities. In some instances, as the following examples illustrate, not all services were available to all clients, with Kansas and Louisiana having waiting lists for HCBS waiver services for new clients. Kansas: Most home and community-based services for the elderly in Kansas were offered under HCBS waivers. These services included in-home help such as personal care, household support, night supervision, assistive devices (such as shower seats), personal emergency response systems, adult day care, and respite care. As of June 2002, 6,300 Kansans were receiving these HCBS waiver services. Because Kansas initiated a waiting list for these services in April 2002, they were not available to new recipients at the time of our review; 290 people were on the waiting list as of June 2002. Louisiana: Most home and community-based services available in Louisiana for the elderly and disabled were offered under HCBS waivers, allowing the state to limit the number of recipients and cap the dollar amount available per day for services.
One waiver, which included such services as personal care, environmental modifications to the home (such as wheelchair ramps), and personal emergency response systems, served approximately 1,500 people in July 2002 and had a waiting list of 5,000 people. The dollar cap on services provided through this waiver increased in September 2002 from $35 per day to $55 per day. The other waiver, which was exclusively for adult day health care, served approximately 525 people, with 201 individuals on the waiting list as of July 2002. New York: New York relied less on HCBS waivers for home and community-based care for the elderly and disabled than the other states because these services were largely available through the state Medicaid plan. Although New York had higher per capita spending on Medicaid long-term care services than the other states in 1999, including about $500 per capita on personal support services for the elderly, spending for HCBS waiver services was a small part of Medicaid spending—$9 per elderly person. As a result, home and community-based services were largely available, without caps, to all eligible Medicaid beneficiaries needing them through the state Medicaid plan. Services offered through the state plan included in-home help, such as hands-on assistance and household support, and personal emergency response systems. Through a waiver, New York also offered such services as home-delivered meals, adult day care, environmental modifications, and nutritional counseling. Oregon: Oregon had HCBS waivers that covered in-home care, environmental modifications to homes, adult day care, respite care, and care in alternative residential settings such as assisted living facilities and adult foster homes. Oregon's waiver services had no waiting list; they were available to elderly and disabled clients based on functional need and served about 12,000 individuals as of June 2002. Oregon established a priority system for providing services based on eligible Medicaid beneficiaries' needs for assistance with activities of daily living. Were a waiting list to become necessary in Oregon, officials told us, the state would allocate services based on its priority categories so that those categorized as more dependent on assistance would receive help first. Table 2 summarizes the home and community-based services offered in the four states we reviewed, either through their Medicaid state plans or through HCBS waivers. Generally, many home and community-based services were covered in each of the states, but in Kansas and Louisiana the level of coverage and the number of individuals served could be limited. Most often, the 16 Medicaid case managers we contacted in Kansas, Louisiana, New York, and Oregon offered care plans for our hypothetical clients—Abby, an 86-year-old chair-bound woman with debilitating arthritis, and Brian, a 70-year-old man with moderate Alzheimer's disease—that aimed to allow them to remain in their homes. The number of hours of in-home care that the case managers offered and the types of residential care settings they recommended depended in part on the availability of services and the amount of informal family care available. In a few situations, especially when the individual did not live with a family member who could provide additional support, case managers were concerned that the client would not be safe at home and recommended a nursing home or other residential care setting.
Most case managers offered in-home services for Abby and Brian except in the one scenario in which Brian lived alone and required constant supervision to ensure his safety because of his moderate Alzheimer's disease. Several case managers noted that they would attempt to honor individuals' preferences to remain at home unless it was unsafe to do so. For Abby, most case managers offered in-home personal care (hands-on assistance with activities such as bathing, toileting, and eating), household support (such as preparing meals and doing laundry), and other supplemental services (such as household modifications or an emergency response system) that would supplement the care she received from her family. When Abby lived with her daughter or her elderly sister, all but 1 of the 16 case managers offered in-home care. When Abby lived alone, with her daughter able to come by only once per day before going to her job, 12 case managers still offered in-home services to provide most of her care, while 4 recommended that she relocate to a nursing home or other residential care setting. Similarly, in the scenarios in which Brian lived with his wife, all but one case manager offered in-home care services for Brian. Most of the care plans continued to rely on Brian's wife to provide much of the supervision of Brian's safety and the reminders for him to bathe, eat, and use the bathroom, but the care plans also offered additional in-home support to provide some hands-on care and household support. However, when Brian would otherwise have to live alone, 13 of the 16 care plans would have him move to a nursing home or other residential care setting. (See table 3.) When the case managers recommended that the individuals remain at home, the number of hours of in-home services offered varied. The care plans generally provided more paid in-home care when less informal family support was available, especially when Abby or Brian lived alone, as shown in the following examples. When Abby lived with her daughter, who was overwhelmed because she was also caring for an infant grandchild, the case managers recommending in-home care offered a median of 28 hours per week. However, the number of hours of in-home care in this scenario varied by case manager from 4.5 hours to 40 hours per week. In this scenario, four case managers recommended that Abby attend adult day care, which would both provide additional hours of care for Abby and give her daughter some respite. When Abby lived with an 82-year-old sister who had difficulty helping with some tasks due to limited strength, the case managers offered a median of 16 hours per week, with a range across case managers of 6 to 37 hours per week. In this scenario, one case manager also recommended that Abby receive most of her care (56 hours per week) through adult day care. When Abby lived alone, with her daughter visiting for an hour each morning, the number of hours of in-home care offered was highest—a median of 32 hours per week and as many as 49 hours per week. For Brian, the number of hours of care offered more consistently reflected the amount of informal help available to him, as the following examples illustrate. When Brian lived with his wife who was in fair health, the case managers offered a median of 18 hours per week of in-home care, ranging from 11 to 35 hours per week. Two case managers also offered adult day care in addition to or instead of in-home care.
If Brian's wife were in poor health, the case managers offered in-home care for a median of 22 hours per week, ranging from 6 to 35 hours per week. One case manager recommended that Brian move to a residential care facility. When Brian lived alone, two of the three case managers who had Brian remain at home offered round-the-clock in-home care—168 hours per week. Table 4 summarizes the number of hours of in-home care offered by case managers for each scenario. Consistent with the hypothetical individuals' preferences to remain at home as long as possible, case managers less often recommended that the individuals move out of their homes to a nursing home or an alternative residential care setting such as an assisted living facility, adult foster home, or adult boarding home. The case managers typically recommended that the individual move only if they believed that he or she would be unsafe at home or, in two instances, if they were concerned that the family caregiver was at risk due to the demands of providing extensive informal care. Of the 16 case managers, 13 recommended that Brian move to a residential care setting if he lived alone, with most noting that they were concerned about his safety living at home alone or were unable to provide a sufficient number of hours of in-home supervision. Four case managers also said that Abby would need to move if she did not have a family member or paid caregiver who could remain with her at night and assist her with using the toilet or in an emergency. In two instances in which the hypothetical individuals did have a family member living with them, case managers were concerned that providing care would be too demanding either for Abby's daughter (who also had an infant grandchild to care for) or for Brian's wife (who was in poor health) and recommended that the client move to an adult foster home. For example, one case manager was concerned that Brian's wife, who was in poor health, would ultimately also need care herself if she continued to provide Brian with most of his support. In some situations, two case managers in the same locality offered notably different care plans. For example, across the eight localities where we interviewed case managers, four case managers offered in-home care for Abby when she lived alone while their local counterparts recommended a nursing home or alternative residential setting. This contrast also occurred three times when Brian lived alone and once each when Abby lived with her daughter and when Brian lived with his wife who was in poor health. In a few cases, the case managers in the same locality both offered in-home care but offered significantly different numbers of hours. For example, one case manager offered 42 hours per week of in-home care for Abby when she lived alone, while another case manager in the same locality offered 15 hours per week for this scenario. Appendix II provides a summary of the care plans provided by each case manager for each of the six hypothetical scenarios. The care plans the case managers offered for the hypothetical individuals, Abby and Brian, sometimes varied as a result of state-specific policies or practices for Medicaid home and community-based services. In particular, neither Abby nor Brian would have been able to receive HCBS waiver services immediately in Kansas or Louisiana because of waiting lists.
When case managers developed care plans based on HCBS waiver services for our hypothetical individuals, Louisiana's cap on the dollars that could be spent per day limited the number of hours of in-home care that could be offered in scenarios in which Abby or Brian needed more extensive care. Also, Kansas case managers may have been more cost-conscious because of state review thresholds and their awareness that maintaining lower average costs per client could help more clients be served. When out-of-home placements were recommended, case managers in Oregon consistently recommended alternatives to nursing homes (either adult foster care or assisted living), whereas case managers in Louisiana were more likely to recommend a nursing home. Other state-specific differences in the care plans included that Louisiana case managers did not offer adult day care in any of their care plans and that New York and Louisiana case managers often considered how Medicare home health services would expand or offset the Medicaid home and community-based services offered. As new clients, our hypothetical elderly individuals with disabilities would not have been able to immediately receive most Medicaid home and community-based services in Kansas or Louisiana due to waiting lists for HCBS waiver services. As a result, our hypothetical individuals would often have had fewer services available to them (only those available through other state or federal programs) until Medicaid HCBS waiver services became available, or they would have had to receive Medicaid-covered nursing home care. The average length of time individuals waited for Medicaid waiver services was not known in either state. However, one case manager in Louisiana estimated that elderly persons for whom he had developed care plans had spent about a year on the waiting list before receiving services. In Kansas, as of July 2002, no one had yet come off the waiting list, which was instituted in April 2002. When case managers in Kansas developed care plans based only on the services currently available from sources other than Medicaid home and community-based services, they tended to offer fewer in-home hours and to recommend out-of-home placements twice as often as they did when the waiver services were available. Service availability also varied more widely across the state when case managers could not offer Medicaid HCBS waiver services. For example, in one area of the state, in-home help was offered using Older Americans Act funds, while in another area those services were not available due to budget constraints. According to Louisiana officials, because Medicaid HCBS waiver services had a waiting list, persons needing immediate assistance who called the state help line could be referred to local councils on aging or to another organization that would help them complete an application for nursing home care. In general, however, the case managers we interviewed in the four states indicated that few services were typically available outside of the Medicaid program. The number of hours of in-home care offered to our hypothetical individuals through Medicaid could be as much as 168 hours per week (24 hours per day) in New York and Oregon, while case managers in Kansas and Louisiana offered at most 24.5 and 37 hours per week, respectively.
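To make concrete how a daily dollar cap translates into a weekly ceiling on hours, consider the following back-of-the-envelope arithmetic; it uses Louisiana's $35-per-day cap on waiver services, discussed below, and a hypothetical hourly rate of $6.60 for attendant services chosen only for illustration, not an actual Louisiana payment rate:

\[
\$35 \text{ per day} \times 7 \text{ days} = \$245 \text{ per week}, \qquad \frac{\$245 \text{ per week}}{\$6.60 \text{ per hour}} \approx 37 \text{ hours per week}.
\]

Under this assumed rate, the cap's implied ceiling is consistent with the maximum of 37 hours per week that Louisiana case managers offered; a higher prevailing wage would lower the ceiling correspondingly.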
The number of hours of in-home care offered was often lowest in Kansas, and in Louisiana case managers tended to change the amount of in-home help offered little even as the scenarios changed in ways that presumably would require more paid assistance because less unpaid care was available from family caregivers. (See table 5.) This variation reflected several factors that case managers took into consideration when determining the amount of care to offer. These factors included the local availability of personal care attendants and other care services, the cost of the care that was allowed under their state's Medicaid program, and the state's review requirements for approving care plans. The number of hours of in-home care case managers in Louisiana could offer was limited by a dollar cap on waiver services of $35 per day at the time we conducted our work. Case managers in Louisiana tended to offer as many hours of care as the cost limit allowed. Therefore, as the amount of informal care changed in the different scenarios, the hours of in-home help offered in Louisiana did not change as much as they did in the other states. For example, when Brian's wife was in poor health, the case managers in Kansas, New York, and Oregon usually either offered more in-home care (from 1.5 to 13.5 additional hours per week) or offered more help through adult day care than they offered when his wife was in better health. In contrast, case managers in Louisiana did not prescribe any more hours of in-home care per week when Brian's wife was in poor health because they could not cover more hours within the cap. Case managers in Kansas often offered the fewest hours of in-home care across all of the states we reviewed. The state had a review process whereby higher-cost care plans were more extensively reviewed than lower-cost care plans. Case managers recognized that Kansas's Medicaid HCBS waiver program and other state programs providing long-term care services had recently been largely closed to new clients due to budget constraints. As one Kansas case manager told us, offering fewer hours of care may reflect the case managers' sensitivity to the waiting list and an effort to serve more clients by keeping the cost per person low. In contrast, case managers in New York and Oregon did not indicate similar cost concerns in offering in-home care hours. When the costs of services were above the cost limit for waiver services in New York, case managers could offer most in-home care through services provided under the state plan, which were not subject to a cost limit. Further, while three case managers in Oregon expressed concern about finding live-in help or providers for lower-paying custodial services, one case manager in New York and one in Oregon offered the most in-home care possible—24 hours a day, 168 hours a week. When recommending that our hypothetical individuals could be better cared for in a residential care setting, case managers offered alternatives to nursing homes to varying degrees across the states, with those in Louisiana relying most heavily on nursing home care and those in Oregon recommending only alternative residential settings. Case managers in Louisiana recommended nursing home care in three of the four care plans for Abby or Brian in which care in another residence was recommended. A Louisiana state official noted that care in alternative residential care settings is generally not covered through the Medicaid waiver.
In contrast, case managers in Oregon never recommended nursing home care for our hypothetical individuals. Instead, case managers in Oregon exclusively recommended either adult foster care or an assisted living facility in the five care plans recommending care in another residence. (See table 6.) Case managers in Oregon twice recommended that our hypothetical individuals obtain care in other residential care settings when case managers in other states would have had them stay at home. Case managers in Kansas, Louisiana, and New York recommended out-of-home placement for Abby or Brian only in scenarios in which they lived alone. In Oregon, however, two different case managers recommended that Abby and Brian move into an adult foster home in scenarios in which they lived with a family member, expressing concern that continuing to provide care to Abby or Brian would be detrimental to the family. State differences also were evident in how case managers used other services to supplement in-home or other care. For example, across all the care plans the case managers developed for Abby and Brian (24 care plans in each state), adult day care was offered four times in New York and Oregon and three times in Kansas. When adult day care was offered in these states, it often served to provide additional hours of care for Abby or Brian as well as some relief for their caregivers. However, none of the care plans developed by case managers in Louisiana included adult day care despite the state's Medicaid waiver for these services. Case managers may not have offered adult day care services because Louisiana covers these services under an HCBS waiver separate from the waiver that covers in-home assistance and, in general, individuals cannot receive services from two separate waiver programs concurrently. Case managers in New York and Louisiana also often considered the effect that the availability of Medicare home health services could have on Medicaid in-home care. For example, one case manager in New York noted that she maximizes the use of Medicare home health before using Medicaid home health or other services. Several of the case managers in New York included the amount of Medicare home health care available in their care plans, and these services offset some of the Medicaid services that would otherwise be offered. In Louisiana, where case managers faced a dollar cap on the amount of Medicaid in-home care they could provide, two case managers told us that they would include the additional care available under Medicare's home health benefit in their care plans, thereby increasing the total hours of care that Abby or Brian would have by 2 hours per week. While six Kansas and Oregon case managers also mentioned that they would refer Abby or Brian to a physician or visiting nurse to be assessed for potential Medicare home health care, they did not specifically include the availability of Medicare home health care in the number of hours of care provided in their care plans. Many states have found that offering home and community-based services through their Medicaid programs can help low-income elderly individuals with disabilities remain in their homes or communities when they otherwise would be likely to go to nursing homes. States differ, however, in how they have designed their Medicaid programs to offer home and community-based long-term care options for elderly individuals and in the level of resources they have devoted to these services.
As a result, as demonstrated by the care plans offered by case managers for our hypothetical elderly individuals in four states, the same individual with the same identified disabilities and needs would often receive different types and intensities of home and community-based care across states and even within the same community. These differences often stemmed from case managers' attempts to leverage both the publicly financed long-term care services available and the informal care and support provided to individuals by their own family members. We requested comments on a draft of this report from Kansas, Louisiana, New York, and Oregon officials. On behalf of these states, we received oral comments from the Program Manager, Kansas Department of Aging; the Waiver Manager, Louisiana Bureau of Community Supports and Services; the Health Program Administrator, Bureau of Long-Term Care, Office of Medicaid Management, New York Department of Health; and the Manager of Community-Based Care Licensing, Office of Licensing and Quality of Care for Seniors and People with Disabilities, Oregon Department of Human Services. Two states commented on our findings concerning the extent of services case managers offered to our hypothetical individuals. The Kansas official noted that although the Kansas case managers' care plans often offered among the fewest hours of in-home care, this does not necessarily mean that the care plans would not meet clients' health and welfare needs. She emphasized that Kansas case managers are trained to ensure that the care plans are sufficient to meet clients' health and welfare needs and that the state reviews the care plans to provide further assurance that they are sufficient. We clarified the report to indicate that we did not evaluate the adequacy or appropriateness of the care plans offered by the case managers in meeting the hypothetical individuals' long-term care needs. The Louisiana official commented that the state was covering as many eligible enrollees in its HCBS waivers as funding allowed and that Louisiana's daily cap for in-home HCBS waiver services reflects the state's budget constraints as well as the need to be cost-effective relative to nursing home care, which had a reimbursement rate of about $85 per day as of September 2002. Two states commented on the importance of individuals' preferences and the local availability of long-term care service providers in shaping case managers' care plans. The Oregon official commented that case managers will develop their care plans to best reflect the preferences of their clients to receive care in their home or in community-based settings. The New York official commented that the availability of certain long-term care services, such as workers to provide in-home care and adult day care settings, varies within the state and can be an additional factor influencing the number of hours of in-home care offered in case managers' care plans. Officials from the four states also provided technical comments that we incorporated as appropriate. We did not seek comments on this report from CMS because we did not evaluate CMS's role or performance with respect to the availability of Medicaid home and community-based services. As agreed with your office, unless you publicly announce this report's contents earlier, we plan no further distribution until 30 days after its issue date.
At that time, we will send copies of this report to other interested congressional committees and other parties. We will also make copies available to others on request. Copies of this report will also be available at no charge on GAO’s Web site at http://www.gao.gov. Please call me at (202) 512-7118 or John E. Dicken at (202) 512-7043 if you have any questions. Major contributors to this report include JoAnne R. Bailey, Romy Gelb, and Miryam Frieder. To obtain information about the availability of long-term care for our hypothetical elderly individuals, we asked 16 Medicaid case managers in Kansas, Louisiana, New York, and Oregon to prepare detailed care plans for two elderly persons with physical or cognitive disabilities. For each hypothetical individual, we presented the case managers with three different scenarios illustrating different levels of informal care available from family members. The first hypothetical person was a woman, “Abby,” who had difficulty performing everyday activities due to physical limitations, while the second was a man, “Brian,” who had difficulty due to cognitive limitations. We contacted each case manager and presented detailed information, as summarized below, regarding the hypothetical individuals’ conditions, needs for assistance, and availability of informal unpaid care from family. We also provided any clarifying information that the case managers requested to be able to develop the care plans. With this information, the case managers used state-specific uniform assessment instruments and their professional expertise to develop care plans as they would with other Medicaid-eligible clients. The first hypothetical Medicaid-eligible individual we presented was Abby, an 86-year-old woman with physical limitations due to debilitating arthritis. She also has type II diabetes. Specifically, Abby is chair-bound, has developed a pressure ulcer, and has some degree of difficulty with all activities of daily living (ADL) and instrumental activities of daily living (IADL) tasks as well as with taking an oral medication. She also needs her glucose levels checked daily to monitor her diabetes. She is alert and oriented, without any cognitive impairment. Her prognosis is for little or no recovery, with decline in her current condition possible. Abby’s husband, who served as her primary caregiver, recently died. We presented three scenarios to the case managers in which Abby’s conditions and needs for assistance remained the same, but the availability of unpaid informal care provided by her family varied: Scenario 1: Abby has moved in with her 51-year-old daughter who also cares for her own infant grandchild. Abby’s daughter provides assistance with Abby’s ADL and IADL needs, but the daughter reports feeling overwhelmed caring for both her mother and grandchild. In addition, the daughter is unable to help with Abby’s diabetes testing because she does not know how to do so. Scenario 2: Abby has moved in with her 82-year-old sister who provides assistance with Abby’s ADL and IADL needs. However, the sister has limited strength and therefore is unable to provide assistance with some ADLs and IADLs, such as helping Abby to the toilet and transferring her to and from her wheelchair. During the week, the sister is also unable to fully meet Abby’s needs for bathing, laundry, and housekeeping. In addition, the sister cannot assist Abby with her diabetes testing. 
Scenario 3: Abby lives alone, and her 51-year-old daughter visits once each morning for 1 hour to provide assistance but is unable to provide additional assistance at other times because she works two jobs and lives in another home. As a result, Abby does not receive assistance with grooming and dressing her upper and lower body. During the day and night, she does not receive assistance with planning and preparing meals, toileting, eating, and transferring to and from her wheelchair to the toilet or bed. Each week, she does not receive assistance with transportation, bathing, laundry, and using the telephone in case of an emergency. In addition, the daughter is unable to assist with Abby’s diabetes testing. The second hypothetical Medicaid-eligible individual we presented to the case managers was Brian, a 70-year-old man with moderate Alzheimer’s disease who has been in a skilled nursing facility for 90 days following hospitalization for a hip fracture. During his stay in the skilled nursing facility, he has become physically weakened and will need physical therapy. Brian takes medication for his hip fracture and for anxiety and temporarily uses a cane when walking, but otherwise is in good physical health. Brian needs supervisory help with most ADLs and IADLs and taking his oral medication—that is, he can perform tasks such as eating and toileting if he is reminded and monitored. Due to dementia resulting from Alzheimer’s disease, he is alert but not oriented and is unable to shift attention and recall directions more than half the time. Further, he is confused during the day and evening, but not constantly. He cannot be left unsupervised. As with the first hypothetical individual, we presented three scenarios to the case managers in which Brian’s conditions and needs for assistance remained the same, but the availability of unpaid informal care provided by his family varied: Scenario 1: Brian lives with his 65-year-old wife, who is his primary caregiver and is in fair health but has recently suffered health problems. She supervises Brian with all ADLs and she performs many of his IADLs herself, but is having increasing difficulty doing these tasks due to her declining health. During the day, she would like additional assistance reminding Brian to toilet and bathe as well as with planning and preparing meals and transportation. Each week, she would like additional assistance with laundry, housekeeping, and shopping. Scenario 2: Brian’s 65-year-old wife is in poorer health than described in scenario 1, and can offer supervisory help with ADLs but cannot perform most IADLs. As a result, Brian does not receive all of the reminders he needs for bathing and toileting nor all of the assistance he needs with planning and preparing meals, transportation, laundry, housekeeping, and shopping. Scenario 3: Brian lives alone because his wife recently died. He needs constant supervision with most ADLs and help with several IADLs. He cannot be left unsupervised and does not receive reminders for bathing, dressing, grooming, toileting, eating, and taking his medications. He also does not receive assistance with planning and preparing meals, transportation, shopping, laundry, and housekeeping. 
We obtained care plans from 16 Medicaid case managers in Kansas, Louisiana, New York, and Oregon that detailed the long-term care services they would offer to two hypothetical Medicaid-eligible elderly individuals—Abby, an 86-year-old chair-bound woman with debilitating arthritis, and Brian, a 70-year-old man with moderate Alzheimer's disease. Each case manager developed six care plans, representing three different levels of unpaid informal care provided to Abby and Brian by their families. The case managers we contacted were specifically responsible for Medicaid home and community-based services. While most were also familiar with other local public services available, clients could receive different care options if they sought care through other approaches, such as physician referrals or contacting local councils on aging. The care plans were based on the information presented by telephone to the case managers we selected to interview in a small town (a population of less than 15,000 people) and a large city (a population of more than 250,000 people) in each of the four states. The care plans should not be generalized to indicate what other case managers in these localities or in other states would likely offer. We did not evaluate the adequacy or appropriateness of the care plans offered by the case managers for meeting the long-term care needs of our hypothetical individuals. Tables 7 through 12 summarize key components of the care plans offered by each of the case managers, designated in the tables as case managers A through P, for each of the six scenarios. The tables summarize the number of hours of in-home care offered by the case manager or whether a nursing home or other alternative residential care setting was recommended. The tables also provide other aspects of care offered to Abby or Brian, including whether the case manager would offer adult day care to supplement or replace in-home or other care; whether the case manager noted the availability of a nurse or of home health services from Medicare, Medicaid, or both; and examples of other services (such as personal emergency response systems, assistive devices such as transfer seats, or companionship services) that may be offered through Medicaid or other federal, state, or local programs.
As the baby boomers age, spending on long-term care for the elderly could quadruple by 2050. The growing demand for long-term care will put pressure on federal and state budgets because long-term care relies heavily on public financing, particularly Medicaid. Nursing home care traditionally has accounted for most Medicaid long-term care expenditures, but the high costs of such care and the preference of many individuals to stay in their own homes have led states to expand their Medicaid programs to provide coverage for home- and community-based long-term care. GAO found that a Medicaid-eligible elderly individual with the same disabling conditions, care needs, and availability of informal family support could find significant differences in the type and intensity of home and community-based services that would be offered for his or her care. These differences were due in part to the very nature of long-term care needs—which can involve physical or cognitive disabling conditions—and the lack of a consensus as to what services are needed to compensate for these disabilities and what balance should exist between publicly available and family-provided services. The differences in care plans were also due to decisions that states have made in designing their Medicaid long-term care programs and the resources devoted to them. The case managers GAO contacted generally offered care plans that relied on in-home services rather than other residential care settings. However, the in-home services offered varied considerably.
A wide array of reform proposals has introduced the concept of personal or individual retirement accounts into the debate over Social Security’s future solvency. In evaluating these proposals we must understand Social Security’s fundamental role in ensuring the income security of our nation’s elderly; the nature, extent, and timing of Social Security’s financing problem; and the differences between the current program and a program that might include individual accounts. Social Security has long served as the foundation of our nation’s retirement income system. That system has traditionally comprised three parts: Social Security, employer-sponsored pensions (both public and private), and personal savings in the form of real and financial assets. Social Security is viewed as providing a floor of income protection that the voluntary forms of employer pensions and individual savings should build upon to provide a secure retirement. However, private pension plans cover only about 50 percent of the full-time work force, and a significant portion of the American public does not have any other significant personal savings. In addition, Social Security is the sole source of retirement income for almost a fifth of its beneficiaries. Given Social Security’s importance as the foundation of retirement income security, it has been a major contributor to the dramatic reduction in poverty among the elderly population. Since 1959, poverty rates for the elderly have fallen from nearly 35 percent to 10.5 percent. (See fig. 1.) Acting sooner rather than later would allow the program to phase in rising costs and to give these individuals time to make the necessary adjustments to their retirement planning. Social Security’s financial condition is directly affected by the relative size of the populations of covered workers and beneficiaries. Historically, this relationship has been favorable. Now, however, the covered worker-to-retiree ratio and other demographic factors, such as life expectancy, have changed in ways that threaten the financial solvency and sustainability of this important national program (see fig. 2). As these demographic trends unfold, annual program revenues will eventually fall short of what is necessary to meet the program’s ongoing costs. To restore solvency to the program today, we would need to immediately increase annual program revenues by 16 percent or reduce annual benefit payments by 14 percent across the board. Even if such actions were taken today, attention would need to be given to their sustainability. We measure solvency in this program over a 75-year period. As each year passes, because the system is in temporary surplus, a year of surplus is dropped from the calculation and a year of deficit is added into the 75-year average. Hence, changes made today that restore solvency only for the 75-year period will result in future actuarial imbalances nearly immediately. For this reason, we must consider what is needed to put the program on a path toward sustainable solvency so we will not face these difficult questions on a recurring basis. Another way to understand the magnitude of the problem is to consider what the system will cost as a percentage of taxable payroll in the future. If we did nothing and let the Trust Funds run out in 2032, resources equivalent to 18 percent of taxable payroll would be needed simply to finance the system in the following year—more than 37 percent higher than the revenues projected to be available under the 12.4 percent payroll tax that currently finances the system (see fig. 3). In weighing reform options, we are making judgments today that will affect those in the future who will be asked to meet these benefit commitments.
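The solvency figures cited above are internally consistent, and a short calculation makes the relationships concrete. The following sketch uses only numbers from this statement: it shows that a 16 percent revenue increase closes roughly the same gap as a 14 percent across-the-board benefit cut, and that a cost of 18 percent of taxable payroll is indeed more than 37 percent above the current 12.4 percent payroll tax.

# Illustrative arithmetic only; all figures are taken from the testimony above.
payroll_tax = 12.4      # current payroll tax, as a percent of taxable payroll
cost_after_2032 = 18.0  # projected cost, as a percent of taxable payroll

# Balancing revenues R and costs C requires either raising R to C or
# lowering C to R, so a 16 percent revenue increase corresponds to a
# benefit cut of about 14 percent.
revenue_increase = 0.16
equivalent_benefit_cut = 1 - 1 / (1 + revenue_increase)
print(f"A {revenue_increase:.0%} revenue increase is equivalent to roughly "
      f"a {equivalent_benefit_cut:.1%} benefit cut")

# The projected cost exceeds the current tax rate by about 45 percent,
# consistent with the statement that it is "more than 37 percent higher."
shortfall = cost_after_2032 / payroll_tax - 1
print(f"18% of payroll is {shortfall:.0%} above the 12.4% payroll tax")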
Importantly, since we can anticipate this situation, and because our economy is strong, we can act now to avoid more painful decisions in the future. A wide spectrum of Social Security reform proposals has surfaced in this debate, and they reflect different perspectives and opinions about how best to address the program’s financing problem. Let me describe briefly the two main perspectives on the appropriate benefit structure for Social Security, which are analogous to the distinction between defined benefit and defined contribution pension plans. The current Social Security system’s benefit structure is designed to address the twin goals of individual equity and retirement income adequacy. Individual equity means that there should be some relationship between contributions made and benefits received (i.e., rates of return on individual contributions). Retirement income adequacy is addressed by providing proportionately larger benefits (redistributive transfers) to lower earners and certain household types, such as those with dependents (i.e., benefit levels and certainty). The current benefit structure combines these twin goals—and the range of benefits Social Security provides—within a single defined benefit formula. Under this defined benefit program, workers’ retirement benefits are based on the lifetime record of earnings, not directly on the payroll tax contributed. Given the current design of the Social Security program and known demographic trends, the rate of return individuals will receive on their contributions is declining. In addition, as noted previously, current promised benefits are not adequately funded over the 75-year projection period. Opportunities for higher returns exist because investors assume some measure of risk that the return expected may not actually be realized. To illustrate the differences between the current Social Security defined benefit structure and a primarily defined contribution structure, we recently studied the experience of three counties in Texas that withdrew from the Social Security system in 1981 and substituted a defined contribution plan for Social Security. The Texas plans offer retirement, survivors, and disability benefits. Although contributions are somewhat higher than those of Social Security, they are roughly comparable when Social Security’s financing gap is considered. Benefits are based on contributions and earnings from investments. Under the Texas plans, contributions are invested conservatively in fixed income securities that are readily marketable. We simulated the benefits that typical workers could receive under these plans and compared them with what would have been received under Social Security. We found that for higher income workers the Texas plans provided higher benefits, especially initially. However, because of the Social Security benefit formula “tilt” toward lower earners, many lower-earning workers could have done better under Social Security. Other features of Social Security, such as adjustments for inflation, also suggest that many median-wage workers might have done at least as well, if not better, had they stayed under Social Security. However, the Texas plans followed a relatively conservative investment strategy with lower returns than are usually assumed in most individual account proposals. Nonetheless, our analysis does suggest we need to be careful that those most reliant on Social Security are adequately protected.
Some reform proposals incorporating individual accounts address the need for such protection by combining defined contribution and defined benefit approaches into a “two-tiered” structure for Social Security. Under such a structure, individuals would receive a base defined benefit amount with a progressive benefit formula and a supplemental defined contribution account benefit. Individuals could be guaranteed a minimum monthly benefit. This approach, however, raises a number of risks and administrative issues, which I will discuss later in this statement. Offsetting the effects of changing demographics through higher investment returns can help make the needed measures less severe, and this is one of the reasons many reform proposals include individual accounts. Still, creating individual accounts does not by itself address the solvency problem. Although individual accounts offer the potential to capture higher investment returns, if the accounts are adopted without the higher returns being shared within the system or without accompanying benefit reductions, the solvency problem will not be alleviated. The extent to which individual accounts affect long-term solvency depends in part upon whether the accounts are “added on” to the existing system or “carved out” of it. Some proposals add on individual accounts as a type of supplementary defined contribution tier. This approach effectively leaves the entire 12.4 percent payroll tax contribution available to finance the program while dedicating additional revenues for individual accounts. These additional revenues might come from a payroll tax increase or from future unified budget surpluses. However, this approach does nothing to help Social Security unless incremental investment income is used to either supplement Social Security revenues or offset current promised benefits. The carve out approach involves creating and funding individual accounts with a portion of the existing payroll tax rate. Thus, from the current combined payroll tax rate of 12.4 percent, a portion could be carved out and allocated to individual accounts. The obvious effect is that less revenue is available to finance the current benefit structure, so the system’s solvency is further eroded. Thus, individual accounts represent a way of using higher rates of return to raise more revenue in the future than the existing Social Security program does. At the same time, including such accounts as an element of reform requires that we consider ways to share the increased returns with Social Security or revise the existing defined benefit structure for future beneficiaries. In other words, to improve Social Security solvency, individual accounts and Social Security reform must be considered together. Moreover, even if we use existing program revenues to finance higher returns over the long term, we must still be able to continue to finance ongoing benefits to retirees in the short term. This problem of “transition costs” means that we may have to devote additional resources to the program in the near term. The trade-off is that in the long run individual accounts may, if structured properly, help finance the program in a more sustainable way. Because individual accounts cannot contribute to restoring solvency without combining with Social Security in some way, it is useful to focus on the implications of individual accounts for Social Security’s defined benefit program. The existing program includes a mix of benefits covering disability, spouses and dependents, and survivors.
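To make the add-on versus carve-out distinction concrete, here is a minimal sketch. The 12.4 percent payroll tax rate comes from the statement above; the 2-percentage-point account contribution is a hypothetical figure chosen purely for illustration and is not drawn from any specific proposal.

# Minimal sketch; the 2-percentage-point figure used below is hypothetical.
PAYROLL_TAX = 12.4  # combined payroll tax, percent of taxable payroll

def funding_split(carve_out_pts=0.0, add_on_pts=0.0):
    """Return (revenue left for the defined benefit tier, revenue for accounts)."""
    db_revenue = PAYROLL_TAX - carve_out_pts      # a carve-out reduces the DB tier
    account_revenue = carve_out_pts + add_on_pts  # an add-on brings new revenue
    return db_revenue, account_revenue

print(funding_split(carve_out_pts=2.0))  # (10.4, 2.0): solvency further eroded
print(funding_split(add_on_pts=2.0))     # (12.4, 2.0): DB financing untouched

The carve-out case leaves only 10.4 percentage points of payroll to finance current-law benefits, which is why carve-out proposals must pair the accounts with benefit changes or new revenues.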
It also includes transfers to lower earners and families. Some proposals that include individual accounts have been criticized for not fully considering these other benefits when touting the advantages of higher returns on defined contribution accounts. But most proposals address the defined benefit portion by making a number of changes and adjustments to the existing program, and some proposals incorporate a guarantee of current law benefits. I will discuss some elements of these proposals briefly and also address the issue of whether to make the individual accounts mandatory or voluntary. Decisions about the appropriate balance between the defined contribution and defined benefit portions will need to consider the purpose of the original Social Security program. The altered defined benefit portion will still be relied upon to provide a foundation that ensures an adequate and certain retirement income level. Existing proposals attempt to revise this part of the program in a variety of ways, including revising the benefit formula (usually to make it more progressive), changing features of the program (such as lowering the cost-of-living adjustment), raising the age of eligibility for normal and early retirement, or revising ancillary benefits (such as those for spouses). Most of these proposed changes are structured so as to leave current and near-term retirees unaffected. In addition, many would include an individual account element only for workers under a stated age, often around 50. Because account balances depend on investment returns that may not be realized, individual accounts also raise “expectation gap” issues with individuals. These expectation gaps might be addressed by pooling the investment accounts and other measures. Another feature of some proposals involves a guarantee of a certain benefit level. This guarantee could be provided in tandem with other benefit structure changes such that the worker would be guaranteed a minimum benefit. One approach would guarantee the current defined benefit. If the individual account provided less than the current benefit, then the system would ensure that benefits were provided to fill the gap. Such an arrangement might be desirable from a benefit adequacy perspective but would require safeguards against the government becoming an insurer of excessive risk-taking by individuals. Clearly, the number of proposals and features makes it difficult to sort out exactly what should be done. We need to study carefully what impacts any given proposal would have, not only on the overall cost of the system but also, very importantly, on individuals and families. One basic feature in this regard concerns whether to make investment in individual accounts mandatory or voluntary. Insofar as individual accounts are intended to substitute for a portion of benefits provided under current law to make it easier to finance the program, most discussion has involved accounts that are mandatory. This is consistent with the stated goal of Social Security to ensure a measure of income protection in old age. The notion of making the accounts voluntary has entered the debate through proposals that seek to maintain the existing benefit structure of the program. A voluntary account is an add-on approach that would supplement Social Security benefits and provide a measure of individual choice. But under such an approach the overall implications for retirement income would be uncertain.
If the voluntary account were supplementary, then it might be difficult to determine whether a voluntary account added to total retirement income; it might merely substitute for other forms of saving. Individual accounts could also give individuals a stock of assets to draw on after they retire. The accounts could thereby contribute to overall retirement security, not just retirement income security. Not all proposals for individual accounts clearly delineate how these accounts would be administered, but those that do vary in three key areas: (1) who would manage the information and money flow needed to maintain a system of individual accounts, (2) how much choice and flexibility individuals would have over investment options and access to their accounts, and (3) what mechanisms would be used to pay out benefits upon retirement. Decisions in these areas would have a direct effect on system complexity and who would bear the costs and additional responsibilities of an individual account system as well as on the adequacy and certainty of retirement income for future retirees. Essentially, most of the decisions about the design of a system of individual accounts amount to trade-offs between individual choice and flexibility on the one hand and simplicity and standardization on the other. A full assessment of the implications of these trade-offs will be essential to the debate on whether and how to implement individual accounts. Table 1 summarizes some of the administrative functions that would accompany any system of individual accounts, the critical decisions associated with each function, and a partial list of the options that could be considered. When considering the design of a system of individual accounts, the first important decision involves account administration and management—that is, where and how the information on individuals’ contributions and the accompanying money flow would be recorded and managed. There are several ways in which this could be done, and the options span a continuum ranging from a centralized record-keeping system managed by the government to a completely decentralized system managed by various entities in the private sector. Each option offers advantages and challenges. A centralized system could hold administrative costs down by taking advantage of economies of scale. For example, administrative costs for the federal Thrift Savings Plan, which centralizes both the record-keeping and investment functions, are low—averaging about $17.00 per account in 1998. Centralizing these functions by building on the current system would not be without challenges, however. Under the current system, employers report earnings and contributions on an individual basis only once per year; it could take from 7 to 22 months from the date an individual made a contribution to the date this information could be attributed to an individual’s record. This time lag would likely make it necessary to pursue interim investment alternatives and to educate individuals on the nature and impact of the lag. Options to change the system to enable more timely recording and investing of contributions do exist, but they would require significant changes in the record-keeping systems of the government agencies, additional costs and reporting burdens for employers, or both. If individual accounts were not centralized, they could be built upon a model similar to either the current 401(k) or Individual Retirement Account (IRA) systems. While providing a wider range of alternatives for individuals, this approach would be accompanied by additional responsibilities and costs for employers, workers, or both.
For example, under a 401(k) model, employers would bear the responsibility for creating an infrastructure to quickly deposit contributions and provide employees with links to and choices among investment managers. Building on an existing employer structure such as this would pose challenges and could prove costly to employers, however, because about 50 percent of the private sector workforce is not covered by an employer-provided retirement plan. Under an IRA approach, individual employees would bear the responsibility on their own to select an investment manager or managers and deposit their contributions. Under both of these decentralized options, the appropriate government oversight role would have to be weighed and considered. A key challenge for all of these proposals would be finding the right balance between individual choice and the related risks and costs to the individual and the government. These inherent trade-offs should be considered carefully. Proposals that build upon a centralized system often assume that the government or some independent oversight entity would select a fund manager (or managers) through a competitive bidding process. Individuals would then select from among the investment options offered by a designated party. Some propose that these options be limited to a small set of passive or indexed funds similar to those offered under the federal Thrift Savings Plan, thus minimizing risk to the individual while providing some degree of choice. Such an approach would also serve to minimize administrative costs and program complexity. However, a centralized system of individual accounts also raises the risk that investment decisions could become politicized, depending on the extent of government’s role in selecting the funds and fund managers and in other investment or fund allocation decisions. There are, however, ways in which these risks could be mitigated (e.g., employing master trust concepts or creating individual participation pools). Other proposals would permit individuals more discretion in selecting their fund manager or managers, either through their employers or directly in the private market. Under this model, individuals would be able to select from among a much broader range of investment options, thus providing individuals with wider latitude to maximize their returns and enhance their retirement incomes. However, with that wider range of choices would come the attendant risk to individuals that their retirement income would not be adequate, as well as risk to the government that individuals with inadequate retirement income would turn to the government for support from other programs. In addition, a wider range of choices could also lead to added administrative complexity and higher administrative costs, which, if not offset by significantly higher returns, would further undermine individuals’ retirement income. Individuals would also need to be educated about their investment options and about the effect of choosing among alternatives offered for annuitizing or otherwise withdrawing or borrowing accumulations from the accounts. This would be especially important for individuals who are unfamiliar with making investment choices, for example, low-income and less well-educated individuals who may have limited investing experience. Moreover, the more choices offered, the more extensive the educational effort would need to be. If fewer investment choices were offered, the educational effort could be less costly.
Who would provide such information to workers or who would bear the cost is not clear, but it might be possible to draw from experiences in the private pension system. The final design element centers on how the accumulated earnings in individual accounts would be preserved for retirement. Ensuring that retirement income is available for the life of the retiree is a fundamental goal of Social Security. Two important decisions relate to preservation. The first is whether to allow access to the accounts by workers before retirement (e.g., through borrowing). For example, most 401(k) pension plans allow participants to borrow against their pension accounts at relatively low interest rates. In prior work, we reported that relatively few plan participants—less than 8 percent—had one or more loans from their pension accounts at a specific point in time. However, those plan participants who borrow from their pension accounts risk having substantially lower pension balances at retirement and, on average, may be less economically secure than nonborrowers. While some may argue that individuals should be allowed the freedom to optimize their lifetime income through borrowing from their accounts before retirement, the added complexity and potential diminution of retirement income need to be given serious consideration. The second important decision is how much flexibility to permit workers when they retire and begin to draw on their accounts. Annuitization of individual accounts is one way to preserve accumulations and ensure that benefits are available for the entire life of the retiree—no matter how long he or she lives. However, there are many questions to address in this area. Because these accounts would be the personal property of individuals, should annuities be required or should individuals have the option to withdraw their account balances in a lump sum or through gradual payments? Could the mechanisms that are currently available for purchasing annuities accommodate the significant increase in demand? Would new structures and additional oversight be needed? How would the various annuity options compare with those of the current system, and would they provide for survivors’ benefits? Should annuities offer protection from inflation? Once again, this is not an all-or-nothing proposition. For example, it would be possible to require that individuals annuitize the portion of their accounts that would ensure a minimum retirement income and then provide more flexibility for any funds remaining. Many people have expressed concerns about the administrative costs of individual accounts and how these costs would affect accumulations, especially for small account holders. Each of the decisions discussed above could have a significant effect on the costs of managing and administering individual accounts, and it will be important to consider their effect on the preservation of retirement income. Administrative costs would depend upon the design choices that were made. The more flexibility allowed, the more services provided to the investor, and the more investment options provided, the higher the administrative costs would be. For example, offering investors the option of frequently shifting assets from one investment vehicle to another or offering a toll-free number for a range of customer investment and education services could significantly increase administrative costs.
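The disproportionate effect of flat fees on small accounts can be shown with a short, hedged illustration. The $17 annual per-account cost is the Thrift Savings Plan figure cited earlier; the contribution amounts, the 30-year horizon, and the 5 percent return are assumptions made purely for illustration.

# Illustrative sketch. Only the $17 flat fee comes from the testimony;
# the contributions, horizon, and return are assumed values.
def accumulation(annual_contribution, flat_fee, years=30, r=0.05):
    """Accumulate annual contributions net of a flat per-account fee."""
    balance = 0.0
    for _ in range(years):
        balance = (balance + annual_contribution - flat_fee) * (1 + r)
    return balance

for contribution in (200, 2_000):  # a small and a larger account holder
    gross = accumulation(contribution, flat_fee=0)
    net = accumulation(contribution, flat_fee=17)
    print(f"${contribution}/yr: fees consume {1 - net / gross:.1%} of the accumulation")

Under these assumptions the same $17 fee consumes 8.5 percent of a $200-per-year account's accumulation but less than 1 percent of a $2,000-per-year account's, which is why fee structure and cost allocation matter most for small account holders.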
Moreover, in addition to decisions that affect the level of administrative costs, other factors would need to be carefully considered, such as who would bear the costs and how they would be distributed among large and small accounts. These cost and distribution issues would receive close scrutiny because individual accounts would be highly visible to individuals and would represent “their money.” The Congress faces significant challenges in restoring sustainable solvency to Social Security. We have a historic opportunity to meet these challenges because of the strength of our economy and future budget surpluses. We also have a historic responsibility—a fiduciary obligation, if you will—to leave our nation’s future generations a financially stable system. I believe it is possible to craft a solution that will protect the Social Security benefits of the nation’s current retirees while ensuring that the system will be there for future generations; and perhaps the answer does not lie solely in one approach or the other—defined benefit or defined contribution. Bridging the gap between these approaches is not beyond our ability. GAO and I stand ready to provide the information and analysis that can help the Congress meet this challenge in a way that can exceed the expectations of all generations of Americans.
Pursuant to a congressional request, GAO discussed how best to ensure the long-term viability of the nation's social security program. GAO noted that: (1) social security forms the foundation of the nation's retirement income structure, and in so doing, provides critical benefits to millions of Americans; (2) yet, problems facing this program pose significant policy challenges that need to be addressed soon in order to lessen the need for more dramatic reforms in the future and to demonstrate the federal government's ability to deal with a known major problem before it reaches crisis proportions; (3) some social security proposals include adding individual accounts similar to defined contribution plans, to the current defined benefit program; (4) these individual accounts offer the potential for increased investment returns but they cannot by themselves restore social security's solvency without additional changes to the current system; (5) in assessing the proposals, policymakers must consider the extent to which the proposals offer sustainable financing for the system; (6) also, they must consider how to balance improvements in individual equity while maintaining adequacy of retirement income for those individuals who rely on social security as their primary or sole source of income; and (7) choosing whether to incorporate individual accounts into the social security system will require careful consideration of a number of design and implementation issues if such a system is to function effectively at a reasonable cost.
Created in 1789, Customs is one of the federal government’s oldest agencies. Customs is responsible for collecting revenue from imports and enforcing customs and related laws. Customs also processes persons, carriers, cargo, and mail into and out of the United States. In fiscal year 1997, Customs collected about $19 billion in revenues and processed about 18 million import entries; about 128 million vehicles and trucks; about 706,000 commercial aircraft; about 214,000 vessels; and about 442 million air, land, and sea passengers entering the country. Customs performs its mission with a workforce of about 19,500 personnel at its headquarters in Washington, D.C., and at 20 CMCs, 20 Special Agent-in-Charge offices, and 301 ports of entry around the country. At the end of fiscal year 1997, Customs had deployed 7,207 inspectors at these ports. This represented an increase of 17 percent over the level in fiscal year 1992, the earliest year for which complete data were available. The nine ports we visited or contacted—LAX Airport; Los Angeles/Long Beach Seaport; JFK Airport; New York/Newark Seaport; Newark International Airport; and the Houston and Detroit air and sea ports—were among the busiest of their kind in the United States in fiscal year 1997. According to Customs workload data, these ports accounted for about 31 percent of all air and sea passengers and about 19 percent of all cargo entries processed by Customs in fiscal year 1997. The ports also accounted for about 21 percent of all inspectors deployed by Customs at the end of fiscal year 1997. We were not able to perform the requested analyses to identify the implications of differences between assessed and actual inspectional personnel levels because Customs had not assessed the appropriate inspectional personnel levels for its ports. Customs had not done so because it does not have a systematic, agencywide process for assessing the need for inspectional personnel and allocating such personnel to process commercial cargo at air, sea, and land ports and to process passengers at sea and land ports. While Customs uses a quantitative model to determine the need for additional inspectional personnel to process air passengers, the model is not intended to establish the level at which airports should be staffed. Customs is in the early stages of responding to a recommendation in our April 1998 report that it establish an inspectional personnel needs assessment and allocation process. Inspectional personnel levels at the selected ports at the end of fiscal year 1997 were at or near the levels for which funds had been provided to the ports. According to Customs officials we interviewed at air and sea ports, these personnel levels, coupled with the use of overtime, enabled the ports to process commercial cargo and passengers within prescribed performance parameters. In our April 1998 report, we reported that Customs does not have a systematic, agencywide process for determining its need for inspectional personnel for processing commercial cargo and allocating such personnel to ports of entry nationwide. We also reported that, accordingly, Customs had not determined the appropriate inspectional personnel levels for each of its cargo ports and for its cargo processing functions. 
In addition, we reported that while Customs had moved in this direction since 1995 by conducting three assessments and two allocations, these assessments and allocations were limited because they (1) focused on the need for additional positions rather than first determining the feasibility of moving existing positions, on Customs’ drug-smuggling initiatives rather than on its overall cargo processing operations, and on Southwest border ports and certain air and sea ports considered to be at risk from drug smuggling rather than on all 301 ports; (2) used different assessment and allocation factors each year; and (3) were conducted with minimal involvement from nonheadquarters Customs units, such as CMCs and ports. Accordingly, we pointed out that focusing only on a single aspect of its operations (i.e., countering drug smuggling); not consistently including the key field components (i.e., CMCs and ports) in the personnel decisionmaking process; and using different assessment and allocation factors from year to year could prevent Customs from accurately estimating the need for inspectional personnel and then allocating them to ports. In its assessment for fiscal year 1997 (conducted in 1995), to estimate the number of inspectional personnel needed, Customs combined factors such as the need to (1) fully staff inspectional facilities and (2) balance enforcement efforts against violators with the need to move legitimate cargo and passengers through the ports. In its assessments for fiscal years 1998 and 1999 (conducted in 1996 and 1997, respectively), Customs used factors such as the number and location of drug seizures and the perceived threat of drug smuggling, including the use of rail cars to smuggle drugs. To allocate to the ports the inspectional personnel that were funded by Congress, Customs used factors such as (1) commercial cargo workloads and (2) specific aspects of the drug smuggling threat, such as attempts by private sector employees at sea and air ports to assist drug smuggling organizations in their efforts to smuggle drugs (described by Customs as “internal conspiracies”). Customs also does not have a systematic inspectional personnel assessment and allocation process for processing land passengers. In 1995, Customs assessed the need for additional inspectional personnel to process incoming land passengers but has not done such an assessment since then. As with the assessments for cargo processing, this assessment was limited to Southwest border ports to address drug smuggling and related border violence. The primary factor considered in this assessment was the physical configuration, i.e., the number of primary passenger lanes, of the ports involved. Customs has not assessed the need for inspectional personnel to process sea passengers. According to Customs officials at the Newark seaport, because of the cyclical nature of the sea passenger workload (in terms of the time of week and year), they did not assign inspectional personnel to process sea passengers on a full-time basis. The port assigned inspectional personnel from other functions, such as cargo processing, on an “as needed” basis to process sea passengers. Conversely, a Customs official at the Los Angeles/Long Beach seaport indicated that it would be operationally desirable to have dedicated inspectional personnel to process the sea passengers that arrive on board cruise ships three days a week. This port also assigned inspectors to process sea passengers on an as-needed basis.
In contrast with its approach to cargo and other passenger processing functions, Customs has employed a quantitative model since 1993 to determine the need for additional inspectional personnel to process air passengers at the 16 largest international airports in the United States, including the 5 airports we visited or contacted. In developing its recommendations for inspectional personnel, the model used the following factors in its formula: (1) the number of arriving international passengers and the activities required to clear them for entry, (2) workforce productivity, (3) growth in workload, (4) the number of passenger terminals at each port, (5) enforcement risk (threat), and (6) the number of positions equivalent to the amount of overtime spent to operate a particular port. Table 1 shows the model’s recommendations for inspectional positions and Customs’ allocations of such positions to the five airports we visited or contacted for fiscal year 1998, as well as the recommendations for fiscal year 1999. Customs officials considered the model to be an analytical tool in their decisionmaking. As such, the model is not intended to establish the level at which airports should be staffed. Rather, the model’s results and recommendations are considered to be an indicator of the additional inspectional positions needed by each of the 16 ports, given the six factors discussed earlier that the model considers. The model’s results and recommendations are reviewed by Customs officials and are modified in two primary ways. First, Customs does not allocate all of the positions recommended for particular ports. According to Customs officials, because additional inspectional positions have generally not been available from regular (“Salaries and Expenses”) appropriations, Customs has provided additional positions to airports mainly by funding them through user-fee revenues. However, according to these officials, user-fee revenues each fiscal year were not sufficient to fund all of the positions the model estimated were needed. For example, for fiscal year 1998, a total of 142 additional positions were actually funded by user-fee revenues, while the model estimated that 288 additional positions were needed. The model recommended that out of the 288 estimated additional positions, JFK Airport needed 108 additional positions and LAX Airport needed 20 additional positions. As a result of internal reviews by Customs officials, JFK Airport was allocated 12 positions and LAX Airport was allocated 16 positions. For fiscal year 1999, the model recommended that out of the 175 total additional positions it estimated as needed, JFK Airport needed 88 additional positions. As discussed below, the model indicated that LAX Airport was overstaffed. As of August 1998, the allocation of inspectional personnel was pending the outcome of congressional appropriations for fiscal year 1999. The appropriations would determine the actual number of additional positions that could be funded. Second, Customs did not move existing positions from airports that the model indicated were overstaffed. For example, for fiscal year 1998, the model indicated that 4 airports were overstaffed by a total of 37 positions. For fiscal year 1999, the model indicated that LAX Airport was overstaffed by 8 positions and that 4 other airports were overstaffed by a total of 42 positions.
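Customs' model itself is not reproduced in this report, so the sketch below is a purely hypothetical illustration of how the six factors listed above could feed a workload-based staffing estimate. The function's formula, its coefficients, and every input value are invented for illustration; a negative result would correspond to the kind of "overstaffed" indication discussed above.

# Hypothetical sketch only. The six factor names come from the report; the
# formula, coefficients, and inputs are invented -- this is not Customs' model.
def additional_positions(passengers, minutes_per_passenger, growth_rate,
                         terminals, threat_multiplier, overtime_equivalents,
                         onboard, hours_per_position=1_600):
    workload_hours = passengers * (1 + growth_rate) * minutes_per_passenger / 60
    required = workload_hours / hours_per_position * threat_multiplier
    required += 2 * terminals           # assumed fixed coverage per terminal
    required += overtime_equivalents    # overtime converted to positions
    return round(required - onboard)    # negative would indicate overstaffing

print(additional_positions(passengers=3_000_000, minutes_per_passenger=2,
                           growth_rate=0.05, terminals=4,
                           threat_multiplier=1.1, overtime_equivalents=20,
                           onboard=80))  # prints 20 under these invented inputs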
As discussed in our April 1998 report, Customs officials stated that they generally did not reallocate existing inspectional personnel for several reasons, including legislative limitations placed on the movement of certain positions, such as those funded by user-fee revenues for specific purposes at specific locations. In addition, according to the Customs official who administers the model, primarily because the model did not take into account certain factors, such as sudden changes in airline markets, Customs did not plan to move positions from the ports that the model indicated were overstaffed. In our April 1998 report, we concluded that in order to successfully implement the Government Performance and Results Act of 1993 (the Results Act) (P.L. 103-62), Customs had to determine its needs for inspectional personnel for all of its operations and ensure that available personnel were allocated where they were needed most. Accordingly, we recommended that, as a sound strategic planning practice, Customs establish a systematic process that would properly align its inspectional personnel with its operational activity goals, objectives, and strategies. Customs’ Assistant Commissioner for Field Operations told us that, in part as a result of reviewing the April 1998 report and its recommendation, Customs recognizes that staffing imbalances may exist at certain ports. In a June 1998 written response to our recommendation, Customs detailed the steps it was taking to implement the recommendation. Specifically, Customs indicated that it had awarded a contract for the development of a resource allocation model that would define the work of Customs’ core occupations and prioritize workload. The model also is to process data using performance measurement methodologies, be compatible with cost accounting and other management controls, and establish linkages between core occupations and support positions. Upon delivery of the model, Customs indicated it would customize a process for using it to meet changing personnel needs and new initiatives. The model is scheduled to be ready for implementation by fiscal year 1999. In conjunction with the development of the resource allocation model, Customs indicated that it was undertaking an initiative to assess and improve the quality of the data to be used in the model. Specifically, the initiative is to review and confirm data definitions and sources and assess the quality of the data. Table 2 shows the combined (cargo and passenger processing) onboard inspectional personnel levels at the end of fiscal year 1997 at the ports we visited or contacted. According to Customs officials at the ports, inspectors who are not funded by user-fee revenues often shift between cargo and passenger processing functions, depending on workload demands and the need to work overtime. Consequently, it could be difficult to establish the exact number of inspectors dedicated to each function at any given time. Accordingly, we did not separate the staffing levels by function. Table 2 also shows that the onboard personnel levels for each port were very near the levels funded from appropriations. According to Customs officials, under Customs’ current “staff-to-budget” procedures, personnel levels at ports are tracked not in terms of “authorized” levels but through “tables-of-organization”—which reflect the number of positions that are funded at a particular port—and the number of personnel onboard.
According to Customs officials at the ports we visited or contacted, the existing inspectional personnel levels described above, together with the use of overtime funds, enabled the ports to process arriving international passengers and cargo within the performance measures established by Customs for these functions in its strategic plan. The performance measure for processing air passengers requires that 95 percent of such passengers be cleared within 5 minutes from the time they retrieve their checked luggage, while the measure for air cargo (formal entries) requires that 99.6 percent of such entries be released in 1 day. We were not able to develop reliable workload-to-inspector ratios because we could not establish a sufficient level of assurance regarding the overall quality of the workload data to conduct further analyses. Specifically, for two ports we identified significant discrepancies between the workload data reported by Customs headquarters and the data reported by a CMC and by the ports themselves. For example, headquarters workload data—considered by Customs to be the official data—showed that the Newark Seaport processed 154,206 sea passengers in fiscal year 1997. However, the port itself reported that it processed 186,957 passengers that same year. The data discrepancies for JFK Airport are discussed earlier in this report. Customs could not provide specific reasons for these discrepancies without conducting additional work. In addition, we could not identify any systematic internal controls over the accuracy and reliability of such data, either at Customs headquarters or at the CMCs and ports we visited or contacted. Workload is one of several factors that Customs considered in the assessments and allocations done over the past 3 years. According to Customs officials, the drug-smuggling threat—such as the use of rail cars to smuggle drugs—was the primary factor considered in these assessments and allocations. As discussed earlier, Customs also considered budgetary constraints and legislative limitations in its personnel assessment and allocation decisionmaking. Table 3 shows the cargo and passenger processing workloads for fiscal year 1997 at the selected ports we visited or contacted as reported by Customs headquarters. The cargo workload data are presented as totals of all types of entries, including formal entries, for each port. We could not perform the staffing and workload analyses requested by the Subcommittees because (1) Customs had not assessed the level of appropriate staffing at its ports and (2) we had concerns about the quality of Customs’ workload data. In addition, Customs considered factors other than workload—such as budget constraints and legislative limitations—in determining its need for inspectional personnel and allocating such personnel to ports. According to Customs officials, these factors must be considered in their decisionmaking in order to maximize the effectiveness of deployed resources. Based on statements to us by senior Customs officials and their response to the recommendation in our April 1998 report, we believe that Customs has recognized that staffing imbalances may exist at certain ports and that it needs to improve the manner in which it assesses the need for and allocates inspectional personnel to ports of entry. Customs’ actions—the award of a contract to develop a resource allocation model and an initiative to improve the quality of data in its management database—are steps in the right direction to address the personnel assessment and allocation issues we identified during our work.
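The Newark figures above also illustrate why we did not compute workload-to-inspector ratios. In the sketch below, the two passenger counts come from the report; the inspector count is purely hypothetical, since ports did not dedicate inspectors to sea passenger processing, and the resulting 21 percent spread shows how far apart the ratios would be depending on whose data were used.

# The two passenger counts are from the report; the inspector count is a
# hypothetical value, used only to show the effect of the data discrepancy.
hq_passengers = 154_206    # headquarters ("official") data, fiscal year 1997
port_passengers = 186_957  # the port's own count for the same year
inspectors = 25            # assumed for illustration only

ratio_hq = hq_passengers / inspectors
ratio_port = port_passengers / inspectors
discrepancy = (port_passengers - hq_passengers) / hq_passengers
print(f"{ratio_hq:,.0f} vs {ratio_port:,.0f} passengers per inspector "
      f"({discrepancy:.0%} discrepancy in the underlying data)")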
Given these steps by Customs, we are not making any recommendations in this report. We requested comments on a draft of this report from the Secretary of the Treasury or his designees. On August 4, 1998, Customs’ Assistant Commissioner for Field Operations provided us with Customs’ comments on the draft. The Assistant Commissioner generally agreed with the information presented in the report and its conclusions and provided technical comments and clarifications, which we have incorporated in this report where appropriate. We are sending copies of this report to the Chairmen and Ranking Minority Members of the congressional committees that have responsibilities related to Customs, the Secretary of the Treasury, and the Acting Commissioner of Customs. Copies will also be made available to others on request. Major contributors to this report are listed in appendix II. If you have any questions or wish to discuss the information in this report, please contact Brenda J. Bridges, Assistant Director, on (202) 512-5081 or me on (202) 512-8777. Our objectives in this review were to analyze (1) the cargo and passenger inspectional personnel levels at selected airports and seaports around the United States and the implications of any differences between these levels and those determined by Customs to be appropriate for these ports (assessed levels) and (2) the cargo and passenger processing workloads and related workload-to-inspector ratios at the selected ports and the rationales for any significant differences in these ratios. To identify the cargo and passenger inspectional personnel levels at the selected ports and the implications of any differences between the assessed and actual personnel levels, we reviewed budget documents and summaries, staffing statistics, cargo and passenger processing performance data, and Customs’ strategic plan for fiscal years 1997 to 2002. We also interviewed Customs officials at headquarters, Customs Management Centers (CMC), and ports where we also observed cargo and passenger processing operations. In addition, we sought to determine how Customs assesses the need for inspectional personnel and allocates such personnel to ports of entry to process cargo and passengers. Accordingly, we reviewed documents related to Customs’ three assessments since 1995 focusing on its drug smuggling initiatives and documents related to Customs’ air passenger processing model, including a September 1992 report about the model done for Customs by two consulting firms. We did not independently assess the validity and reliability of the air passenger processing model or its results. However, we conducted a limited review of the consultants’ report and discussed its findings and recommendations—and Customs’ responses to them—with cognizant Customs officials. Because of the similarities in the subject matter, we relied extensively on information in our April 1998 report that focused on Customs’ inspectional personnel assessment and allocation processes for commercial cargo ports. To identify the cargo and passenger processing workloads and any related workload-to-inspector ratios at the selected ports and the rationales for any significant differences in these ratios, we obtained and reviewed workload data from Customs headquarters, CMCs, and ports. Given time constraints, we did not independently verify the accuracy and reliability of Customs’ workload data. 
However, to obtain some indication of the overall quality of these data, we sought to identify whether Customs had in place any procedures for verifying data. Customs officials could not identify any formal, systematic procedures to verify data quality. Some port officials told us that they informally monitored data in management reports to detect potential errors. In addition, we compared workload data obtained from headquarters, CMCs, and ports and identified several discrepancies, such as those in the number of cargo entries at John F. Kennedy International (JFK) Airport. While Customs officials said they could not explain specific discrepancies in the data without conducting lengthy additional work, they provided some general reasons that could potentially explain the discrepancies. These reasons included the possibility that some ports tracked workload data differently from Customs headquarters. We visited the CMCs in Los Angeles and New York and the Los Angeles International Airport (LAX), Los Angeles/Long Beach Seaport, JFK Airport, New York/Newark Seaport, and Newark International Airport, which, although not part of our original scope, we visited at the request of the Subcommittees due to its proximity to the seaport. We subjectively selected the airports and seaports in both Houston and Detroit and telephonically interviewed cognizant officials from these ports in response to the Trade Subcommittee’s request, following our May 21, 1998, briefing, that we expand the geographic scope of our work to include ports along the Northern and Southern borders of the United States. As discussed earlier, the nine ports we visited or contacted were among the busiest of their kind in the United States in fiscal year 1997. JFK Airport was the busiest in terms of passenger workload and the second busiest in terms of cargo workload and had flights arriving from all over the world. The Newark Airport, while seventh in terms of passenger workload, has been experiencing rapid growth. Specifically, the number of passengers arriving at the airport had grown by 67 percent between fiscal years 1992 and 1997, while the number of arriving flights had grown by 30 percent during the same period. The New York/Newark Seaport was the second busiest in terms of cargo workload, which was expected to grow by over 10 percent annually for the next 4 years. The Los Angeles/Long Beach Seaport was the busiest in terms of cargo workload, collecting 18 percent—about $4 billion—of the duties, fees, and taxes collected by Customs nationwide in fiscal year 1997. LAX Airport was the third busiest in terms of passenger workload and fifth busiest in terms of cargo workload. For example, over 7 million passengers and 41,000 flights were cleared through LAX Airport in fiscal year 1997. The Houston/Galveston Seaport was the eighth busiest in terms of cargo processing, while the Houston Airport was the eighth busiest in terms of passenger processing. The airport’s workload had grown by 12 to 15 percent annually over the past 2 to 3 years. The Detroit Airport was the 13th busiest in terms of passenger processing, while the seaport processed a relatively small number of cargo entries and vessel crews. The results related to inspectional staffing levels and cargo and passenger workloads apply only to the five ports we visited and the four ports we telephonically contacted and cannot be generalized to all Customs ports.
Major contributors to this report: Brenda J. Seltser, Assistant Director, Design, Methodology, and Technical Assistance; Seto J. Bagdoyan, Evaluator-in-Charge; Sidney H. Schwartz, Senior Statistician, Design, Methodology, and Technical Assistance; Donald E. Jack, Evaluator; and Wendy C. Simkalo, Evaluator.
Pursuant to a congressional request, GAO reviewed certain aspects of the Customs Service's inspectional personnel and its commercial cargo and passenger workloads, focusing on: (1) the implications of any differences between the cargo and passenger inspectional personnel levels at selected airports and seaports around the United States and those determined by Customs to be appropriate for these ports (assessed levels); and (2) any differences among the cargo and passenger processing workload-to-inspector ratios at the selected ports and the rationales for any significant differences in these ratios. GAO noted that: (1) it was not able to perform the requested analyses to identify the implications of differences between assessed and actual inspectional personnel levels because, as GAO reported in April 1998, Customs had not assessed the appropriate inspectional personnel levels for its ports; (2) in that report, GAO determined that Customs does not have a systematic agencywide process for assessing the need for inspectional personnel and allocating such personnel to commercial cargo ports; (3) Customs also does not have such a process for assessing the need for inspectional personnel to process land and sea passengers at ports; (4) while Customs uses a quantitative model to determine the need for additional inspectional personnel to process air passengers, the model is not intended to establish the level at which airports should be staffed, according to Customs officials; (5) Customs is in the early stages of responding to a recommendation in GAO's April 1998 report that it establish an inspectional personnel needs assessment and allocation process; (6) Customs officials GAO interviewed at air and sea ports told GAO that the current personnel levels, coupled with the use of overtime, enabled the ports to process commercial cargo and passengers within prescribed performance parameters; (7) the inspectional personnel data that GAO obtained for the selected ports showed that at the end of fiscal year 1997, the personnel levels at these ports were at or near the levels for which funds were provided to the ports; (8) GAO was also not able to perform the analyses to identify workload-to-inspector ratios and rationales for any differences in these ratios because it did not have a sufficient level of confidence in the quality of the workload data; (9) GAO identified significant discrepancies in the workload data it obtained from Customs headquarters, a Customs Management Center (CMC) and two ports; (10) data from the New York CMC indicated that these airports processed about 1.5 million formal entries alone, almost 100,000 entries more than the number headquarters had for all entries at these ports; (11) workload was only one of several factors considered by Customs in the few assessments--which focused on its drug smuggling initiatives--completed since 1995 to determine its needs for additional inspectional personnel and allocate such personnel to ports; and (12) Customs also considered factors such as the threat of drug smuggling, budgetary constraints, and legislative limitations.
As it does now, the United States will fund its share of NATO enlargement primarily through contributions to the three common budgets. NSIP pays for infrastructure items that are over and above the needs of the member nations, including communications links to NATO headquarters or reinforcement reception facilities, such as increased apron space at existing airfields. The military budget pays for the NATO Airborne Early Warning Force program and military headquarters costs, and the civil budget pays primarily for NATO’s international staff and operation and maintenance costs of its civilian facility in Brussels. For fiscal year 1997, the U.S. contribution for the three common budgets was about $470 million: $172 million for the NSIP, $252 million for NATO’s military budget, and $44.5 million for NATO’s civil budget. Any increases to the U.S. budget accounts would be reflected primarily through increased funding requests for the DOD military construction budget from which the NSIP is funded, the Army operations and maintenance budget from which the military budget is funded (both part of the National Defense 050 budget function), and the State Department’s contributions to international organizations from which the civil budget is funded (part of the International Affairs 150 budget function). While NATO will not have finalized its common infrastructure requirements for new members until December 1997 or decided whether or how much to increase the common budgets until June 1998, DOD and State Department officials told us that the civil and NSIP budgets are likely to increase by only 5 to 10 percent and the military budget will probably not increase at all. This would mean an increase of about $20 million annually for the U.S. contribution to NATO. However, as we indicated, NATO has yet to make decisions on these matters. In addition, the United States could choose to help new members in their efforts to meet their NATO membership obligations through continued Foreign Military Financing grants and/or loans, International Military Education and Training grants, and assistance for training activities. The three candidate countries and other PFP countries have been receiving assistance through these accounts since the inception of the PFP program, and this has enabled some of these countries to be more prepared for NATO membership. In fiscal year 1997, over $120 million was programmed for these activities, and about $60 million of this amount went to the three candidates for NATO membership. Any increased funding for such assistance would be funded through the International Affairs and Defense budget functions. It is through NATO’s defense planning process that decisions are made on how the defense burden will be shared, what military requirements will be satisfied, and what shortfalls will exist. NATO’s New Strategic Concept, adopted in Rome in 1991, places greater emphasis on crisis management and conflict prevention and outlines the characteristics of the force structure. Key features include (1) smaller, more mobile and flexible forces that can counter multifaceted risks, possibly outside the NATO area; (2) fewer troops stationed away from their home countries; (3) reduced readiness levels for many active units; (4) emphasis on building up forces in a crisis; (5) reduced reliance on nuclear weapons; and (6) immediate and rapid reaction forces, main defense forces (including multinational corps), and augmentation forces.
Although NATO has not defined exactly the type and amount of equipment and training needed, it has encouraged nations to invest in transport, air refueling, and reconnaissance aircraft and improved command and control equipment, among other items. NATO’s force-planning and goal-setting process involves two interrelated phases that run concurrently: setting force goals and responding to a defense planning questionnaire. The force goals, which are developed every 2 years, define NATO’s requirements. The major NATO commanders propose force goals for each nation based on command requirements. Each nation typically has over 100 force goals. NATO and national officials frequently consult one another while developing force goals and national defense plans. NATO commanders are unlikely to demand that member nations establish units or acquire equipment they do not have. In its annual response to NATO’s defense planning questionnaire, each member verifies its commitment for the previous year, defines its commitment for the next year, and lays out plans for the following 5 years. Alliance members review each nation’s questionnaire and, in meetings, can question national plans and urge member nations to alter their plans. After finishing their reviews, generally in October or November, NATO staff write a report summarizing each nation’s plans and assessing national commitments to NATO. Once NATO members approve this report, it becomes the alliance’s consensus view on each country’s strengths and weaknesses and plan to support the force structure. It is through this process that NATO determines what shortfalls exist, for example, in combat support and combat service support capabilities. According to U.S. officials, NATO is preparing several reports to be presented for approval at the defense ministerial meetings in December 1997. One report will discuss the additional military capability requirements existing alliance members will face as a result of the alliance’s enlargement. According to officials at the U.S. mission and Supreme Headquarters Allied Powers Europe, it is unlikely that any additional military capability requirements will be placed on NATO members over and above the force goals they have already agreed to provide. In other words, if current force goals are attained, NATO will have sufficient resources to respond to likely contingencies in current and new member countries. Therefore, it can be concluded that although enlargement of the alliance is another reason for current allies to attain their force goals, it will not add any new, unknown costs to existing members’ force plans. Other reports resulting from this process will discuss the requirements for commonly funded items in the new nations and their estimated costs. These items include infrastructure that will enable the new allies to receive NATO reinforcements in times of crisis, communication systems between NATO and their national headquarters, and a tie-in to NATO’s air defense system. How these projects will be financed by NATO, for example, whether they will be financed within existing budgets or by increasing the size of NATO’s common budgets, will not be determined until June 1998. Therefore, the impact of these costs on the U.S. contributions to NATO’s common budgets and the U.S. budget will be unknown until next spring. Another report will present an assessment of the capabilities and shortfalls in the military forces of Poland, Hungary, and the Czech Republic. 
NATO does not and will not estimate the costs of the shortfalls of either the current or the new member states, but once these shortfalls are identified, cost estimates can be made by others. However, even though new members' capabilities and shortfalls will be identified in December, these countries' force goals will not be set until the spring. These force goals will, in effect, be a roadmap for the new members on how to address their shortfalls. (See app. I for a timeline illustrating these events.) When the DOD, CBO, and Rand studies were completed, many key cost determinants had not been established. Consequently, each study made a series of key assumptions that had important implications for its results. DOD made the following key assumptions:
- Specific nations would be invited to join NATO in the first round of enlargement.
- NATO would continue to rely on its existing post-Cold War strategy to carry out its collective defense obligations (that is, each member state would have a basic self-defense capability and the ability to rapidly receive NATO reinforcements).
- NATO would not be confronted by a significant conventional military threat for the foreseeable future, and such a threat would take many years to develop.
- NATO would continue to use existing criteria for determining which items would be funded in common and which costs would be allocated among members.
Using these assumptions, DOD estimated the cost of enlarging NATO would range from about $27 billion to $35 billion from 1997 to 2009. The estimate was broken down as follows: about $8 billion to $10 billion for improvements in current NATO members' regional reinforcement capabilities, such as developing mobile logistics and other combat support capabilities; about $10 billion to $13 billion for restructuring and modernizing new members' militaries (for example, selectively upgrading self-defense capabilities); and about $9 billion to $12 billion for costs directly attributable to NATO enlargement (for example, costs of ensuring that current and new members' forces are interoperable and capable of combined NATO operations and of upgrading or constructing facilities to receive NATO reinforcements). DOD estimated the U.S. share of these costs would range from about $1.5 billion to $2 billion—averaging $150 million to $200 million annually from 2000 to 2009. The estimated U.S. share chiefly consisted of a portion of direct enlargement costs commonly funded through NATO's Security Investment Program. DOD assumed that the other costs would be borne by the new members and other current member states and concluded that they could afford these costs, although this would be challenging for new members. (See app. II.) In our review of DOD's study of NATO enlargement, we (1) assessed the reasonableness of DOD's key assumptions, (2) attempted to verify pricing information used as the basis for estimating enlargement costs, (3) looked into whether certain cost categories were actually linked to enlargement, and (4) identified factors excluded from the study that could affect enlargement costs. We concluded that DOD's assumptions were reasonable. The assumption regarding the threat was probably the most significant variable in estimating the cost of enlargement. Based on information available to us, we concluded that it was reasonable to assume the threat would be low and there would be a fairly long warning time if a serious threat developed.
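As a simple consistency check, the three cost categories above sum to DOD's overall range; the short sketch below, using only the figures cited in the text, shows this.

```python
# Illustrative sketch: DOD's three cost categories (in billions of constant
# 1997 dollars, from the text) sum to its overall $27-$35 billion estimate.
categories = {
    "Current members' reinforcement capabilities": (8, 10),
    "New members' restructuring and modernization": (10, 13),
    "Direct enlargement costs": (9, 12),
}

low = sum(lo for lo, hi in categories.values())
high = sum(hi for lo, hi in categories.values())
print(f"Total estimate: ${low} billion to ${high} billion")  # $27 billion to $35 billion
```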
This assumption, and the assumption that the post-Cold War strategic concept would be employed, provided the basis for DOD's judgments concerning required regional reinforcement capabilities, new members' force modernization, and, to a large extent, those items categorized as direct enlargement costs. DOD also assumed that during 1997-2009, new members would increase their real defense spending at an average annual rate of 1 to 2 percent. Both private and government analysts project gross domestic product (GDP) growth rates averaging 4 to 5 percent annually for the Czech Republic, Hungary, and Poland during 1997-2001. Thus, projected increases in defense budgets appear affordable. Analysts also point out that potential new member countries face real fiscal constraints, especially in the short term. An increase in defense budgets at the expense of pressing social concerns becomes a matter of setting national priorities, which are difficult to predict. If these countries' growth rates do not meet expectations, their ability to increase real defense spending becomes more problematic. DOD further assumed that current NATO members would on average maintain constant real defense spending levels during 1997-2009. Analysts have expressed somewhat greater concern about this assumption and generally consider it to be an optimistic but reasonable projection. Some analysts indicated that defense spending in some current member states may decline further over the next several years. Such declines would partly be due to economic requirements associated with entry into the European Monetary Union. Despite our conclusion that DOD's underlying assumptions were sound, we concluded for several reasons that its estimates are quite speculative. First, DOD's pricing of many individual cost elements was based on "best guesses" and lacked supporting documentation. This was the case for all three categories of costs: direct enlargement costs, current members' reinforcement enhancements, and new members' modernization requirements. Most of the infrastructure upgrade and refurbishment cost estimates were based on judgments. For example, DOD's estimate of $140 million to $240 million for upgrading a new member's existing air base into a NATO collocated operating base was not based on surveys of actual facilities but on expert judgment. We were told that the actual cost could easily be double—or half—the estimate. DOD's estimated costs for training and modernization were notional, and actual costs may vary substantially. DOD analysts did not project training tempos and specific exercise costs. Instead, they extrapolated U.S. and NATO training and exercise costs and evaluated the results from the point of view of affordability. DOD's estimate for modernization and restructuring of new members' ground forces was also notional and was based on improving 25 percent of the new members' forces. However, it did not specify what upgrades would be done and how much they would cost. Second, we could find no linkage between DOD's estimated cost of $8 billion to $10 billion for remedying current shortfalls in NATO's reinforcement capabilities and enlargement of the alliance. Neither DOD nor NATO could point to any specific reinforcement shortfalls that would result from enlargement that do not already exist. However, existing shortfalls could impair the implementation of NATO's new strategic concept.
DOD officials told us that while reinforcement needs would not be greater in an enlarged NATO, enlargement makes eliminating the shortfalls essential. This issue is important in the context of burdensharing because DOD's estimate shows that these costs would be covered by our current NATO allies but not shared by the United States. Finally, NATO has yet to determine what military capabilities, modernization, and restructuring will be sought from new members. Consequently, DOD had little solid basis for its $10 billion to $13 billion estimate for this cost category. Moreover, DOD and new member governments have noted that new members are likely to incur costs to restructure and modernize their forces whether or not they join NATO. Indeed, some countries have indicated that they may need to spend more for these purposes if they do not become NATO members. DOD showed these costs as being covered entirely by the new members. NATO enlargement could entail costs in addition to those included in DOD's estimates, including costs for assistance to enhance the PFP or other bilateral assistance for countries not invited to join NATO in July 1997. In addition, the United States may provide assistance to help new members restructure and modernize their forces. For example, Polish officials said they may need up to $2 billion in credits to buy multipurpose aircraft. While not an added cost of enlargement, such assistance would represent a shift in the cost burden from the new member countries to the countries providing assistance. DOD did not include such costs in its estimate of the U.S. share, though it acknowledged that such costs were possible. Moreover, U.S. and NATO officials have stated that additional countries may be invited to join NATO in the future, most likely in 1999. DOD's cost estimate did not take into account a second or third round of invitations. If additional countries are invited, the cost of enlargement would obviously increase. CBO and Rand estimated the cost of incorporating the Czech Republic, Hungary, Poland, and Slovakia into NATO. They based their estimates on a range of NATO defense postures, from enhanced self-defense with minimal NATO interoperability to the forward stationing of NATO troops in new member states. However, they also noted that the current lack of a major threat in Europe could allow NATO to spend as little as it chose in enlarging the alliance. Because of the uncertainties of future threats, and the many possible ways to defend an enlarged NATO, CBO examined five illustrative options to provide such a defense. Each option built on the previous one in scope and cost. CBO estimated that the cost of the five options over the 15-year period would range from $61 billion to $125 billion. Of that total, CBO estimated that the United States might be expected to pay between $5 billion and $19 billion. CBO included in its range of options a $109-billion estimate that was predicated on a resurgent Russian threat, although it was based on a self-defense and reinforcement strategy similar to that used by DOD. Of this $109 billion, CBO estimated that the United States would pay $13 billion. Similarly, Rand developed estimates for four options to defend an enlarged NATO that build upon one another, from only self-defense support at a cost of $10 billion to $20 billion to the forward deployment of forces in new member states at a cost of $55 billion to $110 billion.
These options include a middle option, costing about $42 billion, that was also based on a self-defense and reinforcement strategy. Rand estimated that the United States would pay $5 billion to $6 billion of this $42 billion in total costs. Several factors account for the differences between DOD's estimates and the CBO and Rand estimates, even those that employed defense strategies similar to DOD's. (App. III illustrates the major results and key assumptions of the three estimates.) CBO's cost estimate is significantly higher than DOD's for the following reasons:
- DOD assumed reinforcements of 4 divisions and 6 wings, whereas CBO assumed a force of 11-2/3 divisions and 11-1/2 wings and a much larger infrastructure for this force in the new member states.
- CBO's modernization costs are much higher than DOD's and include the purchase of 350 new aircraft and 1,150 new tanks for the new member states. DOD assumed that about 25 percent of the new member states' ground forces would be modernized through upgrades and that each nation would procure a single squadron of refurbished Western combat aircraft.
- CBO assumed much higher training costs, $23 billion, which include annual, large-scale combined exercises. DOD included $2 billion to $4 billion for training.
- CBO included the purchase of Patriot air defense missiles at a cost of $8.7 billion, which is considerably higher than DOD's assumed purchase of refurbished I-HAWK type missiles at $1.9 billion to $2.6 billion.
- CBO's infrastructure costs were much higher than DOD's and included new construction, such as extending the NATO fuel pipeline, which CBO assumed would meet U.S. standards. DOD assumed planned refurbishment of existing facilities that would meet minimal wartime standards.
Rand's cost estimate is somewhat higher than DOD's, although both were based on similar threat assessments. First, its reinforcement package was larger—5 divisions and 10 wings—and therefore infrastructure costs were higher. Second, it assumed new members would purchase the more expensive Patriot air defense system rather than the refurbished I-HAWKs. Finally, it assumed greater training costs than did DOD. The author of the Rand study stated that if he had used DOD's assumptions, the cost range would have been almost identical to DOD's. Mr. Chairman, this concludes our prepared remarks. We would be happy to answer any questions you or the Committee members may have.

Appendix I: Timeline of key events
- NATO issues study on enlargement.
- NATO issues invitations to Poland, Hungary, and the Czech Republic to begin accession talks.
- NATO prepares several reports: additional military capability requirements for existing alliance members that will result from the alliance's enlargement; requirements for commonly funded items in the new member nations, including infrastructure that will enable the new allies to receive NATO reinforcements in times of crisis, communication systems between NATO and their national headquarters, and a tie-in to NATO's air defense system; cost estimates for items eligible for common funding presented by NATO officials; and the capabilities and shortfalls in the military forces of Poland, Hungary, and the Czech Republic.
- NATO defense ministerial meeting to approve the above reports.
- New members' force goals set.
- NATO decides whether or how much to increase the common budgets, which would then be shared among current and new members.
- Target date for new member accession into NATO.
Appendix III: Estimated total costs of NATO enlargement
- DOD: $27-$35 billion in constant 1997 dollars
- CBO: $61-$125 billion in constant 1997 dollars ($109 billion for a defense strategy similar to DOD's)
- Rand: $10-$110 billion in constant 1996 dollars ($42 billion for a defense strategy similar to DOD's)
Pursuant to a congressional request, GAO provided information on issues related to the cost and financial obligations of expanding the North Atlantic Treaty Organization (NATO), focusing on: (1) current U.S. costs to support NATO's common budgets and other funding that supports relations with central and east European nations and promotes NATO enlargement; (2) NATO's defense planning process, which will form the basis for more definitive cost estimates for an enlarged alliance; and (3) GAO's evaluation of the recent Department of Defense (DOD) study of NATO expansion and a comparison of DOD's study with studies of the Congressional Budget Office (CBO) and the Rand Corporation. GAO noted that: (1) the ultimate cost of NATO enlargement will be contingent on several factors that have not yet been determined; (2) NATO has yet to formally define its future: (a) strategy for defending the expanded alliance; (b) force and facility requirements of the newly invited states; and (c) how costs of expanding the alliance will be financed; (3) also unknown is the long-term security threat environment in Europe; (4) NATO's process for determining the cost of enlargement is under way and expected to be completed by June 1998; (5) in fiscal year 1997, the United States contributed about $470 million directly to NATO to support its three commonly funded budgets, the NATO Security Investment Program (NSIP), the military budget, and the civil budget; (6) this is about 25 percent of the total funding for these budgets; (7) it is through proposed increases to these budgets, primarily the NSIP and to a lesser extent the civil budget, that most of the direct cost of NATO enlargement will be reflected and therefore where the United States is likely to incur additional costs; (8) over $120 million was programmed in fiscal year 1997 for Warsaw Initiative activities in the three countries that are candidates for NATO membership and other Partnership for Peace (PFP) countries; (9) this money was provided to help pay for Foreign Military Financing grants and loans, exercises, and other PFP-related activities; (10) funding for these activities will continue, but the allocation between the candidates for NATO membership and all other PFP participants may change over time; (11) this funding is strictly bilateral assistance that may assist the candidate countries and other countries participating in PFP to meet certain NATO standards, but it is not directly related to NATO decisions concerning military requirements or enlargement; (12) GAO's analysis of DOD's cost estimate to enlarge NATO indicates that its key assumptions were generally reasonable and were largely consistent with the views of U.S., NATO, and foreign government officials; (13) the assumption that large-scale conventional security threats will remain low significantly influenced the estimate; (14) DOD's lack of supporting cost documentation and its decision to include cost elements that were not directly related to enlargement call into question its overall estimate; (15) because of the uncertainties associated with enlargement and DOD's estimating procedures, the actual cost of NATO enlargement could be substantially different from DOD's estimated cost of about $27 billion to $35 billion; and (16) Rand and CBO cost estimates are no more reliable than DOD's.
The tax code provides for 35 different categories of exempt organizations, each covering one or more types of permissible activities. The majority of these organizations are covered by section 501 of the IRC. Section 501 includes private foundations and public charities, as well as other organizations, such as social welfare organizations, business leagues, and veterans' organizations. However, other types of entities are also wholly or partially tax exempt, such as farmers' cooperatives and political organizations, as are education-oriented programs such as educational savings accounts and tuition programs. Most organizations seeking exemption must submit an application to IRS. If the information in the application meets the requirements for tax exempt status, IRS will issue a determination letter approving tax exempt status. Tax exempt organizations are generally not required to file an income tax return. Instead, if an organization normally has $50,000 or more in gross receipts, and meets other requirements, it must annually file one of three versions of the Form 990 information return, which require information on employees, revenue and income, assets and liabilities, program activities, and compensation. Most organizations that fall below the gross receipts threshold of $50,000 and need not file a Form 990 information return are required instead to file an electronic postcard, Form 990-N, which asks for names and contacts associated with the organization, and confirmation of the organization's annual gross receipts. During examinations, EO reviews a specific return—such as a Form 990 or employment tax return—as well as the organization's activities. There can be open examinations on one organization spanning several years, or related returns for a single tax period for that same organization. Some examinations are a result of compliance projects that IRS initiates to identify areas of noncompliance or address known areas of noncompliance. For example, a project on gaming in charity fundraising activities, an activity often conducted by veterans organizations, led to a relatively high examination rate—compared with other rates by tax-exempt status—for those organizations in fiscal year 2014. Table 1 shows the examination rate by organization exempt status for fiscal year 2014. EO's process for selecting returns for examination is complex and includes multiple steps (see figure 1). Referrals, which are complaints against exempt organizations submitted to EO, involve additional steps; they are excluded from figure 1 and discussed later in this report. EO uses a variety of sources to identify exempt organizations for possible examination, and conducts review steps that filter out some organizations. Through ongoing programs, time-limited projects, and data queries, EO identifies organizations with characteristics that may pose high noncompliance risks. Referrals point EO toward particular organizations that may be noncompliant. In addition, EO reviews organizations' claims for tax refunds and examines those that are questionable; EO selects examination cases for training examiners; and some cases are selected based on programs run by other parts of IRS that relate to exempt organizations. EO also identifies some organizations for examination in the course of conducting other examinations. For a breakdown of closed examinations by source during fiscal year 2014, see table 2.
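The filing rule described above amounts to a simple threshold test. The sketch below is a deliberately simplified illustration using only the $50,000 gross receipts threshold from the text; the actual determination involves additional requirements and organization types not modeled here.

```python
# Minimal sketch (simplified) of the annual filing rule described above.
# Real rules involve additional requirements not modeled here.
GROSS_RECEIPTS_THRESHOLD = 50_000  # threshold cited in the text

def annual_filing(gross_receipts: float) -> str:
    """Return the information return an exempt organization generally files."""
    if gross_receipts >= GROSS_RECEIPTS_THRESHOLD:
        # Larger organizations file one of three versions of the Form 990.
        return "Form 990 (one of three versions)"
    # Smaller organizations file the Form 990-N electronic postcard.
    return "Form 990-N (e-Postcard)"

print(annual_filing(75_000))  # Form 990 (one of three versions)
print(annual_filing(20_000))  # Form 990-N (e-Postcard)
```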
Primary examination sources are listed in the rows, and examinations that are initiated based on other examinations are reflected in the third and fourth columns. Ongoing programs. Every year, EO identifies organizations with characteristics that are known to pose a risk of noncompliance. For example, through a document matching program, EO matches wages reported on Form W-2, Wage and Tax Statement, to those reported on exempt organizations' employment tax returns; organizations with mismatches have an increased probability of noncompliance. Additionally, ongoing programs are used in EO's oversight role. For example, EO performs annual reviews of a random sample of organizations that recently received favorable determination letters to check if the organizations are current with their filing requirements and are operating in accordance with their tax exempt purposes. Compliance projects are time-limited efforts to study noncompliance risks. Through compliance projects, EO identifies specific areas of potential noncompliance (such as fundraising) or specific types of organizations (such as community foundations), selects and reviews a subset of relevant organizations, and addresses any noncompliance it finds. Compliance projects identify organizations for review using data queries, random samples, and/or nonprobability samples. A committee of TE/GE officials, including EO managers and analysts, has historically been tasked with developing new compliance projects and obtaining approval from EO executives. Projects that identify areas of significant noncompliance—based on the nature of the issues involved and the number of examinations that result in tax assessments or organization status changes—become part of EO's ongoing programs. However, in the past few years, EO has decreased its focus on compliance projects and developed a new focus on Form 990 analytics queries (see below). Form 990 analytics queries identify organizations for review with data queries on Form 990, Return of Organization Exempt from Income Tax. For example, some queries check for missing Form 990 schedules that should be filed, based on boxes checked on Form 990 or Form 990 responses above certain dollar thresholds. To help ensure alignment with agency-wide IRS objectives, EO is in the process of aligning its Form 990 analytics queries with five focus areas that correspond to those objectives. EO initiated the Form 990 analytics strategy in 2010, and developed and began testing about 150 data queries by 2012. Of the original 150 Form 990 analytics queries, EO had completed testing for 11 and was in the process of testing 20 others, as of April 2015. Successful queries are run on an ongoing basis. As with compliance projects, queries that identify areas of significant noncompliance—based on the nature of the issues involved and the number of examinations that result in tax assessments or organization status changes—become part of EO's ongoing programs. Referrals are complaints of exempt organization noncompliance made by third parties, including the public and other parts of EO and IRS. (Although referrals can be made from sources within EO, 90 percent of referrals are from sources external to EO.) Referrals will be discussed in greater detail later in this report. Examinations conducted in conjunction with other IRS divisions. EO conducts some examinations as part of programs managed by other divisions of IRS.
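In effect, the analytics queries described above are filters over Form 990 data. The sketch below illustrates the general pattern of one such filter; the field names and dollar threshold are hypothetical, since the actual query definitions are internal to IRS.

```python
# Hypothetical sketch of a Form 990 analytics-style query: flag returns where
# a checked box or a dollar amount above a threshold implies a schedule that
# was not filed. Field names and threshold are invented for illustration.
returns = [
    {"ein": "11-1111111", "box_checked": True,  "schedule_filed": False, "reported_amount": 0},
    {"ein": "22-2222222", "box_checked": False, "schedule_filed": False, "reported_amount": 600_000},
    {"ein": "33-3333333", "box_checked": True,  "schedule_filed": True,  "reported_amount": 0},
]

DOLLAR_THRESHOLD = 500_000  # hypothetical threshold that triggers a schedule

def flag_for_review(r: dict) -> bool:
    # A schedule is required if the box is checked or the amount is large.
    needs_schedule = r["box_checked"] or r["reported_amount"] > DOLLAR_THRESHOLD
    return needs_schedule and not r["schedule_filed"]

flagged = [r["ein"] for r in returns if flag_for_review(r)]
print(flagged)  # ['11-1111111', '22-2222222']
```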
For example, EO conducts some examinations for the Global High Wealth program, managed by IRS’s Large Business and International Division, which monitors high wealth individuals and the networks of enterprises and entities they control. Additionally, EO conducts some examinations of exempt organizations for the National Research Program, an IRS-wide effort to develop and monitor measures of taxpayer compliance run by IRS’s Office of Research, Analysis, and Statistics. IRS initiated a National Research Program study on employment tax noncompliance in 2010, focusing on Form 941, Employer’s Quarterly Federal Tax Return, for tax years 2008, 2009, and 2010. Training examinations, used for training examiners, are identified through database queries selecting for lower-grade cases with compliance issues, and cases with particular topics relevant to training. Claims are exempt organizations’ requests for tax refunds, adjustments of tax paid, or credits not previously reported or allowed. For example, recognition of tax-exempt status occurs after an organization was formed, effective to the formation date. As a result, an organization can file a claim for refund of income taxes paid for the period for which its exempt status is recognized. Most claims are allowed in full, but claims that raise questions may be considered for examination. In the course of examining an exempt organization’s tax return, EO examiners may become aware of other tax returns that are at risk for noncompliance and should be examined. EO has two sets of procedures in place for opening examinations on those returns when this occurs. Related pickups. Examiners may expand the examination of a return to include the organization’s tax returns for prior or subsequent years. They may also expand the examination to include different forms filed by the organization—for example, by expanding the examination of a Form 990 to include Form 941, Employer’s Quarterly Federal Tax Return. Examination staff must obtain manager approval for these examinations. Substitutes for return. When beginning an examination of a particular tax return, EO examiners check whether the organization is current in its filing requirements for all its returns. If EO examiners find that a return is missing, and are unable to secure the return through contact with the organization, they may prepare a blank, “dummy” return called a substitute for return (necessary because IRS tracks its examinations based on returns). The organization’s related activities, records, and/or documents may then be examined. For some cases, EO’s Exempt Organizations Compliance Area (EOCA) may conduct a compliance check or compliance review, processes that can serve as intermediate research steps (see table 3). Some organizations are filtered out through these steps, some are brought into compliance and do not require further work, and some are sent for classification (see below). Other cases are not sent for intermediate research and are sent directly for classification, or directly to Case Selection and Delivery, which manages the pool of returns that may be sent for examination. Compliance checks involve contact with an exempt organization through a letter or questionnaire. For example, in fiscal year 2014, EOCA sent compliance check letters to organizations that did not file a required Form 990, and to organizations that reported certain types of income on their Form 990 but did not file related required forms. 
These contacts are a form of education for an organization and can result in the organization coming into compliance with requirements—for example, by filing a required return. Compliance checks may also be used to determine whether an organization is adhering to record-keeping requirements, and they are used for document matching cases. With some compliance checks, the contact is sufficient to bring an organization into compliance, or to determine that it was already in compliance. Other cases may be sent to EOCA’s classification group or a triage team (see below), which determines which cases meet criteria for examination. Compliance reviews do not involve contacting the reviewed organization. EOCA conducts research using IRS data, external databases, and publicly available information, including information on the Internet. Items reviewed may include tax returns, applications for exemption, and websites. For example, compliance reviews are used for the program mentioned above that reviews organizations that were recently granted exempt status, to determine whether they are now operating in accordance with their exempt purposes. Some cases identified through a compliance project may be sent straight to Case Selection and Delivery, the pool of returns that may be sent for possible examination (see below). For other cases, an intermediate research step may or may not be conducted (see above), and the next step is classification, a review of the examination potential of a return. Claims Classification. Classifiers apply their experience and technical expertise to determine whether a claim is allowable by reviewing taxpayer documentation. With management approval, claims classifiers recommend processing refunds, credits, or adjustments for claims which are clearly allowable, as most claims are. Claims that raise questions are sent for possible examination. Other Classification. Classification of most non-claims cases is done by experienced examiners who conduct further research on an organization. They decide which returns to filter out and which to send forward for possible examination, based on their professional judgment of the likelihood of noncompliance and the significance of the identified issues. Returns are classified by one of two groups, EOCA classification and Exempt Organizations Examinations classification, depending on how the case originated and the details of the case. The two groups use different criteria and procedures, but have similar functions and results; classification by either group may result in a correspondence examination, a field examination, or accepting a return as filed and filtering it out from further review. Triage Teams. For some EOCA compliance projects, examination selection decisions are made using project-specific criteria applied by project-specific teams, called triage teams, instead of through the usual classification groups. For example, compliance projects that use questionnaires may have triage teams compare questionnaire responses with Form 990 data to select organizations for examination. Triage teams look at the compliance check or compliance review results and apply the project-specific criteria to determine which returns to filter out, and which to send forward for possible examination. Case Selection and Delivery. 
After identification of organizations for possible examination, and after any intermediate research steps and classification, most returns that are selected for possible examination are sent to EO's Case Selection and Delivery unit. (Returns that were classified by EOCA generally skip this step and may be sent straight to examination by EOCA examiners.) At Case Selection and Delivery, the returns become part of a pool of returns that may be sent for examination. Claims that raise questions (see above) are sent for examination. High-priority and certain other referrals (discussed below) are also sent for examination, according to EO officials. Aside from these referrals and claims, decisions are made based on available resources. Different examinations must be conducted by different grades of examination staff, depending on the nature of the issues involved and the level of income on the return; large and complex organizations are examined through EO's Team Examination Program. Additionally, field examinations generally involve in-person contact and so must be conducted in the geographic area of the exempt organization. Examination offices tell the Case Selection and Delivery unit the grades of examiners available. The unit then sends returns to the offices based on those grades and, in the case of field exams, on the locations of the offices. In fiscal year 2014, 93 percent of returns sent to the Case Selection and Delivery unit were ultimately sent to examination offices. Dismissals. After selected returns arrive at examination offices, managers and examiners conduct risk assessments on the returns. They may choose not to conduct the examinations, if returns seem to pose limited noncompliance risk or for other reasons. In this report we use the term "dismissed" to refer to such returns. There were 1,858 returns dismissed in fiscal year 2014, for reasons summarized in table 4. The most frequent reason for dismissing a return was that the return was approaching its statute of limitations; IRS must ensure there is adequate time to complete the examination. Other reasons for dismissal included lack of examination potential, and finding no concerns with a claim for a tax refund. A manager may make the decision to dismiss a return if it has not yet been assigned to an examiner. Examiners who identify returns that they believe should be dismissed are required to fill out a form stating the reason and have that form signed by their manager. EO has special procedures for processing referrals—complaints of potential noncompliance of exempt organizations—and selecting organizations for possible examination based on referrals. Referrals are the third largest source of EO examinations. EO receives referrals from many sources. They may originate externally—most commonly from the general public—or from other IRS divisions that identify potential noncompliance. Figure 2 summarizes the sources of referrals received in fiscal year 2014; data from recent years showed similar breakdowns. Processing referrals involves several steps, depending on the allegation in the referral or the type of organization involved. Figure 3 summarizes the steps for different referral types. Referral classification. EO currently has five classifiers, part of Exempt Organization Examinations, who sort incoming referrals into basic categories, based on an initial review of the referral. All referrals are to be logged into the Referral Database.
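The assignment step described above can be thought of as matching returns to offices on two attributes: the required examiner grade and, for field examinations, location. The following sketch illustrates that logic with invented data; it is not EO's actual system.

```python
# Hypothetical sketch of Case Selection and Delivery's assignment logic:
# match returns to examination offices by available examiner grade and, for
# field examinations, by geographic area. All data are invented.
offices = [
    {"office": "East", "grades": {11, 12}, "area": "east"},
    {"office": "West", "grades": {12, 13}, "area": "west"},
]

returns = [
    {"ein": "11-1111111", "required_grade": 13, "exam_type": "field", "area": "west"},
    {"ein": "22-2222222", "required_grade": 11, "exam_type": "correspondence", "area": "east"},
]

def assign(ret: dict, offices: list) -> str | None:
    for office in offices:
        if ret["required_grade"] not in office["grades"]:
            continue
        # Field exams involve in-person contact, so location must also match.
        if ret["exam_type"] == "field" and ret["area"] != office["area"]:
            continue
        return office["office"]
    return None  # stays in the pool until a suitable office has capacity

for ret in returns:
    print(ret["ein"], "->", assign(ret, offices))
```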
Referrals are sorted to identify those that do not involve exempt organizations and therefore should go to other IRS divisions (misroutes), referrals that do not mention an organization or a violation of the tax code ("no issue" referrals), and referrals that should be classified for possible examination. After sorting, EO sends an acknowledgment letter to the individual who submitted the referral (except IRS employees). Each of these classifiers specializes in one or more types of referrals, such as political activity referrals, and reviews those referrals for examination potential, according to the EO referrals manager. The majority of referrals are considered general referrals, meaning that a single classifier makes a decision about examination potential. For these referrals, there are no specific criteria for identifying examination potential. Instead, referrals are classified using the facts and circumstances of each referral, which involves a classifier using his or her experience, and all available data on the referral, to determine whether potential noncompliance exists. Some referrals, such as those originating with whistleblowers or those pertaining to the Tax Equity and Fiscal Responsibility Act, have additional steps or criteria for classification. The classifier is responsible for documenting his or her decision in the Referral Database, as well as providing an explanation for the decision. According to the EO referrals manager, this includes mentioning any research conducted to corroborate the allegation and, for referrals classified as "no issue," an explanation of the decision not to pursue. Referral committees. Referrals that deal with political activity allegations or what IRS has identified as sensitive allegations or organizations are also reviewed by a three-person committee. The committees are composed of a rotating set of senior examination staff or managers who make the final decision about examination potential. According to the Internal Revenue Manual (IRM), committee members should rotate every 12 months on a staggered schedule to maintain continuity and expertise; volunteers are to be solicited in a memorandum from the Director of Examination for EO. EO has three types of committees to review referrals. 1. Political Activities Referral Committee. Reviews allegations of potentially noncompliant exempt organization political activities, including churches. The Political Activities Referral Committee reviewed 501 referrals in fiscal year 2014. 2. Church Committee. Reviews referrals concerning churches for allegations other than political activity. EO currently has two Church Committees, which handle the same types of cases. Church and High Profile Committees (see below) combined reviewed 43 referrals in fiscal year 2014. 3. High Profile Committee. Reviews referrals concerning exempt organizations that have attracted media attention, that have financial transactions with known or suspected terrorist organizations, or are referrals from elected officials. A committee may also review referrals involving other factors, identified by a classifier, which indicate that committee review is desirable for reasons of "fairness and integrity," according to the Referrals Procedures document. Each committee member is responsible for reviewing the referral and entering a determination on examination potential, along with comments, into the Referral Database.
Political activity referrals, and other referrals as requested, are sent for a compliance review to provide additional information to committee members to inform their decisions. Referral committee members are to use the reasonable belief standard as criteria for examination selection. The outcome for the referral is determined by a majority, i.e., at least two of the three committee members being in agreement. This outcome is automatically tallied in the database when members enter a decision. Committee decisions are considered final and cannot be overturned, although, as discussed below, church examinations must go through additional steps before initiation. Referrals prioritization. For each referral selected for potential examination in fiscal year 2014, a priority level was assigned that guides how quickly the referral is sent to the field for examination. For political activity referrals, the Political Activities Referral Committee determines whether a political activity referral is high priority or "other," based on criteria in the Referrals Procedures document. All other referrals are assigned one of seven priority levels. For fiscal year 2014, priority levels were intended to feed into 16 workload priorities, as laid out in a memorandum from the Director of EO Examinations. These priorities guided examination management in deciding which cases to work. EO management has discontinued the prioritization memoranda, according to EO officials. Instead, starting with fiscal year 2015, referrals requiring collaboration with another IRS business division and those dealing with fraud are considered high priority, and therefore will be examined before other referrals. The Case Selection and Delivery unit assigns other referrals to the field based on grade level and location of examination staff, as described earlier in this report. Referrals that are assigned for examination may be dismissed if examination management or staff do not find examination potential, if the return is approaching the statute of limitations, or for other reasons. Referral outcomes. Most referrals are not selected for potential examination; referrals that are selected may not actually become examinations due to resource constraints or other considerations, according to EO officials. Table 5 summarizes referrals processing during fiscal year 2014. Statutory requirements must be met before EO can initiate an examination on a church. Specifically, after determining that there is reasonable belief (based on facts and circumstances recorded in writing) that a church may not qualify for exemption, IRS must first issue a Notice of Inquiry to the church. An inquiry serves as a basis for determining whether the organization qualifies for exemption as a church, whether it is engaged in activities subject to tax, or whether an excess benefit transaction has occurred. If there is a reasonable belief that an inquiry is necessary—based on facts and circumstances of the case, including committee review if a referral is involved—then the information is sent to a designated official, who must be an "appropriate high-level Treasury official." Currently, the designated official is the Director, Exempt Organizations, who must also get concurrence from the TE/GE Commissioner, according to EO officials. Under the statute, an "appropriate high-level Treasury official" must reasonably believe that an inquiry is necessary. According to the IRM, division counsel and an EO area manager must review the notice before it can be issued.
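The committee decision rule described above, at least two of three members in agreement, can be expressed as a simple tally. The sketch below illustrates that rule; the Referral Database's actual implementation is not described in this report.

```python
# Minimal sketch of the three-member committee majority rule described above:
# the outcome is whichever determination at least two of three members enter.
from collections import Counter

def committee_outcome(votes: list[str]) -> str:
    """votes: three determinations, e.g. 'examine' or 'no examination'."""
    assert len(votes) == 3, "committees have three members"
    determination, count = Counter(votes).most_common(1)[0]
    # With three votes and two possible determinations, a majority always exists.
    return determination if count >= 2 else "no majority"

print(committee_outcome(["examine", "no examination", "examine"]))  # examine
```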
If a church does not respond to the inquiry or cannot resolve IRS's concerns, EO examination staff will prepare a Notice of Examination and a memorandum on why an examination is necessary, according to the IRM. Two levels of division counsel and the designated official must approve the notice. (Currently, the designated official is the Director of EO, who acts in concurrence with the TE/GE Commissioner.) These statutory procedures are followed for employment tax issues, but do not apply to routine requests, such as solicitation of a delinquent employment tax return or information requested to resolve inconsistencies revealed in the matching program, among other things. We identified at least 72 referrals on potential political activity in churches in the Referral Database that were selected for examination by classifiers in fiscal year 2014 and 40 in fiscal year 2013. Although selected by classifiers, these referrals would not necessarily become examinations, as they would need to follow the notice of inquiry procedures under IRC §7611. Also, between June 2013 and April 2014, EO suspended examinations on new political activity issues, while procedures related to political activity examinations were being reviewed and updated, according to EO officials. That hiatus likely also affected the number of church exams. The Financial Investigations Unit examines exempt organizations that are selected as a result of projects or queries, or have been identified with potential indicators of fraud. There were 121 Financial Investigations Unit examinations of exempt organizations initiated in fiscal year 2014. Classifiers and examination staff may refer returns to the Financial Investigations Unit if they believe indicators of fraud or other criteria are met, as listed in the IRM (Part 25, Chapter 1, Section 2, and Part 4, Chapter 75, Section 21.8). Staff making such a referral complete a form describing the issues and consult with an exempt organizations fraud specialist. A Fraud Technical Advisor must approve placing the return in fraud development status. The Financial Investigations Unit has also provided classifiers with supplemental criteria to help identify potential fraud. If the Financial Investigations Unit finds indicators of fraud, the case will be referred to IRS's Criminal Investigation division. The Criminal Investigation division is not part of TE/GE, but it does investigate cases involving exempt organizations and may refer a case to a U.S. Attorney for prosecution, which may lead to an indictment and eventual trial. Criminal Investigation also receives referrals from U.S. Attorneys, local law enforcement, and other informants; its staff conduct preliminary reviews of the referral, and then develop cases in the same manner as those originating from EO Examination. The criteria for moving forward with a prosecution resulting from a referral are based on the judicial district of the taxpayer. Each U.S. Attorney has different types of cases or minimum tax revenue loss they are willing to prosecute; there is no checklist of criteria, according to a senior IRS analyst working in Criminal Investigations. For example, one region may prosecute a tax revenue loss of $250,000, while another region may prosecute a loss of $100,000. We found that the design of certain examination selection controls aligned with the IRM or with standards for effective internal control (see sidebar and figure 4). We also found that the implementation of some of these controls (i.e.,
the steps used for examination selection) aligned with these standards. As such, these controls may serve as tools to help EO meet TE/GE's mission of applying the tax law with integrity and fairness. However, we found that other controls were deficient in either their design or their implementation. These control deficiencies increase the risk that EO could fall short of TE/GE's mission and select organizations for examination in an unfair manner—for example, based on an organization's religious, educational, political, or other views. Effective internal control helps agencies adapt to shifting environments, evolving demands, and new priorities. As programs change and agencies strive to improve operational processes and implement new technology, management should continually evaluate its internal control system so that it is effective and updated when necessary. We found several examples of examination selection processes that met design and implementation internal control standards. Design of Internal Controls. Agency management designs control activities in response to its objectives and risks to achieve an effective internal control system. Control activities are the policies, procedures, techniques, and mechanisms that enforce management directives to achieve an agency's objectives and address related risks. Internal control standards require control activities to be effective and efficient in accomplishing an agency's control objective. As part of their control activities, agencies must ensure that internal controls are clearly documented. IRS requires primary sources of guidance with an IRS-wide or division-wide impact—such as policy documents, procedures, and guidelines—to be included in the IRM. This requirement is intended to ensure that IRS employees have the approved policy and guidance they need to carry out their responsibilities in administering the tax laws. In alignment with IRM requirements, EO maintains well-documented procedures for several examination selection processes. For example, EO management has developed IRM sections for referrals classification, claims processing, Financial Investigations Unit case processing, and examinations. During our focus groups, employees working closely with these IRM procedures generally reported they were useful. For examinations, there were IRM sections on several different types of examinations and steps in the process, including substitutes for return, related pickups, the Team Examination Program (used for large and complex organizations), and church examinations. In focus groups, we found that staff who use these (and other IRM sections) to conduct their work generally view them as valuable tools that help them administer the tax law (see text box). Selected EO employee focus group participants' statements regarding IRM procedures: "I came off the streets and can read the IRM and understand it. Kudos to the IRM to the degree that this is my springboard to know what to do and how to do it." "The IRM is excellent, it tells you everything." "I go straight to the IRM. It's there to provide fair and consistent treatment." To further align with IRM requirements, EO is working to draft and implement IRM procedures for testing and adopting 990 analytics queries. For example, EO will likely formalize its current practice of requiring approval to test queries not already listed in EO's annual workplan, according to one of the draft IRM sections.
This new draft IRM also describes certain development work to be conducted prior to implementing a query. EO plans for both IRMs to be implemented in the last quarter of this fiscal year, according to EO officials. Having written procedures will help ensure that EO is taking action to address internal control risks. EO's publishing these procedures in the IRM increases transparency to the public. Implementation of Internal Controls. Internal control standards require that control design be adhered to in practice, known as control implementation. EO's control design for examination selection includes requirements for various types of documentation of examination decisions and approvals. We found that multiple EO processes successfully implemented several types of such controls. For example, both EOCA procedures and referrals procedures require Case Chronology Records—records that track actions taken on a case—and we found that in practice, 100 percent of cases closed in fiscal year 2014 contained Case Chronology Records, an example of successful implementation of controls for recording decisions and approvals. Internal control standards require that management and employees establish and maintain an organization-wide environment that sets a positive and supportive attitude toward internal control and conscientious management. An agency's management plays a key role in providing leadership in this area, especially in setting guidance for proper behavior. The IRM sets standards of conduct for treating taxpayers fairly, stating that it is the duty of agency officials to determine the correct amount of tax owed with strict impartiality as between the government and taxpayer, and without favoritism or discrimination between taxpayers. It also says that agency representatives must adhere to the law and recognized standards of legal construction in making conclusions of fact or application of the law. We found in focus groups that EO employees who conduct examinations and other reviews consistently equated fairness with assessing organizations strictly by whether they comply with tax law and with treating similar types of taxpayers equally. EO employees' relatively uniform understanding of fairness and the alignment of their understanding with the IRM is a significant step toward EO achieving a positive control environment, the foundation for all other control standards. Selected EO employee focus group participants' statements regarding the meaning of fairness: "(You should) treat everyone alike, it doesn't matter who filed the information, it's what they filed. (You look to see) are they doing the right activity (as permitted by their tax exempt status), organizing properly, and capturing transactions accurately. It doesn't matter the perspective of an organization. And you should support everything you do by regulations and code." "You should treat each organization in a specific regulation section in the same manner. If it's a 501(c)(4), if one organization is allowed to do something, then they all are—they are treated fairly and equally." We found that while EO had established various procedures over its examination selection processes, there were several areas where EO's controls were not well designed or implemented (see table 6). Taken as a whole, these control deficiencies increase the risk that EO could select organizations for examination in an unfair manner—for example, based on an organization's religious, educational, political, or other views.
Internal control standards require that controls, and an agency's documentation of them, should be properly managed and maintained. As noted previously, IRS requires primary sources of guidance with an IRS-wide or organizational impact—such as policy documents, procedures, and guidelines—to be included in the IRM. This requirement is intended to ensure that IRS employees have the approved policy and guidance they need to carry out their responsibilities in administering the tax laws. Moreover, including primary guidance in the IRM fulfills certain legal requirements. For example, one way IRS complies with the Freedom of Information Act is by making most IRM guidance available online to the public. EO's primary sources of guidance for compliance checks, compliance reviews, and EOCA classification are not included in the IRM, as required by the IRM. In 2008, EO officials drafted high-level descriptions of compliance checks and compliance reviews for the IRM, but these were never published. According to EO officials, staffing levels were insufficient to complete this work. Instead, EO has developed procedure documents for compliance checks, compliance reviews, and EOCA classification outside of the IRM. These documents provide instructions to staff about the required approvals, documentation, and other steps taken as part of each case selection process. The compliance check and compliance review processes also have job aid documents with information relevant to processing cases. By not complying with agency requirements and standards for internal control, the internal procedures for compliance checks, compliance reviews, and EOCA classification are not covered by the same standards as the IRM. For example, deviations from the IRM are only allowed with approval by executive management and with appropriate communication to employees, whereas these standards do not explicitly apply to other documents. Reliance on procedures that are outside of the IRM creates the risk that EO staff could deviate from procedures without executive management approval, which could result in unfair selection of organizations' returns for examination. Excluding these procedures from the IRM also reduces transparency, since they would be available to the public if they were in the IRM. Internal control standards require that controls should be documented and properly managed and maintained. According to the IRM, IRS program managers are responsible for ensuring that IRM content is reviewed annually for accuracy and completeness and for analyzing issues that may necessitate changes. Likewise, supplemental guidance—such as job aids and desk guides—must be reviewed at least annually to ensure the content remains accurate. We found that EO's examination selection procedures, including the IRM, contained outdated and, in some cases, inaccurate material, such as the following examples: A provision in the IRM for manually selecting returns for the Team Examination Program, rather than using standardized criteria based on an organization's income and assets, is not actually used, according to EO officials. Further, the IRM requires that a worksheet be completed for returns under consideration for the program. The worksheet includes a section on factors considered in the examination selection decision and a section for reviews and signatures. EO officials stated that the worksheet has not been used since 2012 and they are currently updating that section of the IRM.
A section on closing dismissed examinations requires that a stamp with management signatures be placed on original, non-electronic returns that are dismissed. However, this requirement is not implemented consistently—in our file review, we found that very few of the paper files had this stamp—and given the increasing reliance on electronic copies of returns, this requirement may be becoming less relevant. Further, EO has another control in place (discussed in appendix II) that suffices to document management approval for dismissed returns. EO management said they will use the signatures on paper returns until all forms can be electronically signed. A section on requirements for opening an examination of a related return states that staff should document the manager’s approval to expand the examination, and that a written statement of approval should be included in the examination file. EO officials told us that managers approve related pick-ups within the Reporting Compliance Case Management System (RCCMS), the database EO uses for tracking examinations, which has replaced the need for written documentation of approval within the file.

We also found examples of outdated and inaccurate guidance in procedures documents. Specifically, in the September 2014 Referrals Procedures document we found the following: The procedures include criteria for sending a referral to a “High Priority Committee.” However, there is no High Priority Committee, according to EO officials. The EO officials described this as a “typo” and said that all mentions of the High Priority Committee should be replaced with “High Profile Committee.” This mistake was also in the previous year’s version of the document. The procedures include a requirement that subject matter experts periodically review referrals screened out by the classifier responsible for political activity referrals to confirm that exclusion from committee review is appropriate. EO officials stated that this requirement was originally formulated in 2012, but the reviews have not been conducted because of a subsequent decision to send all political activity referrals for committee review.

We provided EO with these, and other, citations of outdated or inaccurate procedures. The EO Director acknowledged that there are outdated procedures and IRM sections for EO processes, and that many of these are the result of changes that have occurred over the past 18 months. EO officials said they were in the midst of an effort to update outdated and inaccurate guidance in the IRM and anticipated completing this process by the end of August 2015. Outdated and inaccurate procedures pose the risk that employees might follow incorrect procedures and therefore administer tax laws inconsistently.

According to internal control standards, controls should generally be designed to assure that ongoing monitoring occurs in the course of normal operations. Monitoring involves management assessments of the design and operating effectiveness of internal control systems. It also includes ensuring that individual managers and supervisors know their responsibilities for internal control and the need to make control and control monitoring part of their regular operating processes. We found several deficiencies, in closed examination and dismissed examination files as well as in Referrals, EOCA, and EOCA Classification database files, where staff did not follow procedures.
For example, of the 15 committee referrals (political activity, church, and high profile committees) we reviewed that were selected for examination, 4 were missing a required description of the allegation for the committee. Also, an estimated 22 percent of examination returns that were dismissed in fiscal year 2014 did not have the required management signature. Taken as a whole, the deficiencies we found point to insufficient monitoring of case processing. See appendix II for details on the deficiencies. For each of the processes above, EO provided information on its monitoring activities to help ensure that procedures were followed and that internal controls were operating effectively (see examples in table 7). The EO referrals manager said that he randomly reviews one referral per month from each of the five classifiers and documents the results. By having multiple reviewers assess certain referrals, the referrals committees also play a role in ensuring examination selection criteria are applied appropriately. The EOCA manager said that an analyst conducts quality reviews of a random sample of compliance checks, although these are not conducted regularly due to limited resources. A review of the EOCA database showed that out of 61 quality reviews performed for fiscal year 2014, 30 were performed in the approximately 3-month period between February 3 and May 9, 2014, and 24 were performed in the month-and-a-half period between June 30 and August 11, 2014. Seven reviews were performed in the remaining 7-and-a-half months of the fiscal year. For claims that are dismissed and approved within the Case Selection and Delivery group, the manager said she reviews the required forms and the claim dollar amounts to ensure the correct claim amount is in RCCMS. EO has procedures that require mandatory reviews of certain examinations, such as church tax audits and revocations, to ensure technical and procedural accuracy. EO also has procedures for conducting quality reviews on a sample of cases to evaluate the managerial, technical, and procedural aspects of examination cases. While these reviews span many issues, most of them unrelated to examination selection processes, certain procedural issues related to examination selection may be assessed, such as whether prior year, subsequent year, or related returns were included in the examination where warranted. In spite of these monitoring activities, we found that EO employees were not always following documentation requirements for select EO examination selection procedures. As such, the current level of ongoing monitoring of examination and database files, intended to ensure that selection decisions are documented and approved and thereby help ensure fairness, is inadequate. Additional monitoring may help management further evaluate EO’s internal control system and make changes to ensure staff are consistently following procedures.

Internal control standards require that files be readily available for review and properly managed and maintained. The IRM also states that all records are required to be efficiently managed until final disposition. Yet IRS was not able to locate all of the examination files we requested for review in a timely way. Based on our estimates, 13 percent of the closed examination files could not be located in time for us to review them during the audit. Specifically, IRS could not locate the files until June 2015, whereas we submitted our original request for the files in early February 2015.
Initially, EO officials said the IRS unit that stores the files was unable to locate some of the files. More specifically, EO was told that the unit was unable to locate 3 of 13 of these files’ blocks—groups of files with consecutive control numbers. This can mean that a block was not created or that it was shelved in the wrong location. Other files were shown as checked out by IRS staff and not returned to their proper file location. At the end of our audit, IRS located 10 of the missing files after undertaking a specific search for them. However, the length of time it took to locate these files—nearly 4 months—shows that IRS’s process does not ensure that all files are readily available for review. According to EO officials, missing claims files are generally due to the difficulties of working with paper files. Missing case files can result in lost revenue to the federal government (if a case goes to court), create unnecessary taxpayer burden (if EO later needs to contact the taxpayer regarding material that would have been in the file), make cases unavailable for other units such as quality review groups or advisory groups, and hinder congressional oversight.

Internal control standards require that internal controls and all transactions and significant events be clearly documented, and that all documentation and records be properly managed and maintained. EO projects must adhere to certain requirements as they are developed, and these requirements can vary depending on the type of project. All project teams are required to develop project-specific procedures. For projects not associated with Form 990 analytics queries, project teams are required to obtain managerial approval of a proposal. The project proposal should contain, among other things, a description of the project objectives, the criteria to be used to select returns, and the signatures of EO executives and select functional directors. Other requirements depend on whether the cases selected for review are sent directly for field examination, or whether the cases are subject to an intermediate step such as a compliance check, a compliance review, or an examination through the office/correspondence examination program. The documents associated with a project—such as the proposal, procedures, selection criteria, and training materials—are maintained by the project team leader. We identified select areas where project documentation could be improved. Our review of 11 project files found that some project requirements were not always met: For one project in our sample, EO officials were unable to provide any documentation to support the project requirements we assessed in our project file review, including an approved project proposal. Of the seven projects requiring a project proposal—specifically, those projects not associated with 990 analytics queries—we found that none of the project proposals fully described the criteria to be used for the project, including the one project missing documentation. For example, three project proposals described that a judgmental (nonprobability) sample would be used, but the judgmental sample was not fully described. One of these projects’ proposals listed criteria for its first phase, conducted several years ago, but did not reflect selection criteria for a subsequent phase of the project. There was a memorandum in the file describing selection criteria for the more recent phase, but no documented management approval.
Another project proposal described how the original sample of cases for which compliance reviews would be conducted was to be pulled, but not how information gathered during the compliance reviews would be used to select cases for examination. EO officials told us that when criteria are added or changed for a project, executives may be briefed, and any briefing documents should be included in the project file. However, of the seven projects with incomplete criteria, we found only one file with an executive briefing document describing selection criteria. Two other projects had memoranda in the file describing selection criteria but lacked evidence of executive review or approval.

EO officials told us about improvements they had made to project controls over the past few years. For example, EO officials told us that over the past year they increased efforts to reorganize how project file documentation is stored to help ensure consistency across projects. Likewise, projects using a compliance review or the office/correspondence examination program had an added control, developed toward the end of fiscal year 2013—a document capturing the date on which certain control activities were performed and the individual responsible for each activity. Yet these and other improvements do not address the need to fully describe selection criteria in a project proposal, or to document executive briefings and approvals when criteria are added or changed. With project proposals lacking clear selection criteria, EO management risks the potential of projects selecting organizations for review in an unfair manner.

According to internal control standards, procedures that enforce management directives help ensure that management directives are carried out. Yet EO does not have procedures describing the steps or requirements for triage teams—project team members responsible for deciding which organizations will be examined—when making or documenting examination selection decisions in the project files. Triage teams are composed of subject matter experts, including project leads, analysts, and managers. They review the results of compliance checks or compliance reviews and, using their professional knowledge and expertise, apply project-specific criteria to make selection decisions. Specifically, there is no procedure requiring that triage teams document the specific selection criteria they use to identify organizations for examination. While broad selection criteria are generally described in the project proposal, the triage team may apply additional criteria to filter out cases and align with available resources for working examinations. EO officials told us that these selection criteria decisions would generally be documented in meeting minutes or elsewhere in the project files. Of the two projects using triage teams with closed examinations in fiscal year 2014, one project had dozens of files in the meeting minutes project folder, making it difficult to assess documentation of selection criteria, while the other project was missing documentation altogether. Similarly, with respect to documenting the selection decisions themselves, the same project was missing project file documentation altogether, and the other project had several spreadsheets that tracked examination selection decisions, but it was not always clear which organizations had been selected. More than 2,000 of the compliance checks and compliance reviews conducted in fiscal year 2014 were, or will be, subject to triage team review.
Having procedures to ensure that triage teams document selection criteria and decisions clearly and consistently will help EO ensure that management directives are consistently followed and that examination selection decisions are made fairly. Internal control standards also state that program managers need operational data to determine whether they are meeting their agencies’ strategic and annual performance plans and meeting their goals for accountability for effective and efficient use of resources. We did not review the project files for 2 of the 13 projects in our sample because EO officials told us that these projects were not active in fiscal year 2014. Although no examinations should have been conducted as part of these older projects, we found 156 closed examinations attributed to them in the Returns Inventory and Classification System (RICS) database. For one project, EO officials said that the majority of the project work was conducted between 2006 and 2008 and that the project was concluded by 2009. EO officials said it was likely that EO examination staff miscoded these examinations, and that this is more likely to happen when an examination is associated with a newer project of a similar name. EO uses data on closed examinations, by project, to track performance against workplan goals. Without adequate controls to ensure that examinations are appropriately coded—for example, by ensuring that project codes for closed projects without anticipated future examinations are not utilized by staff—EO management may not have the information it needs to ensure the efficient and effective use of resources.

The IRM states that all data elements in IRS databases should be defined and documented as part of the database design process. However, EO does not have complete and up-to-date data dictionaries—documents that define each data element in an information system—for the systems it uses to track and document examination selection decisions. EO does not have a data dictionary for the Reporting Compliance Case Management System (RCCMS)—which EO uses for tracking examinations and will use for additional selection purposes in the future—nor for the EOCA or EOCA classification databases. EO’s data dictionary for the referrals database is incomplete, as it defines fewer than half of the data elements in the database. In addition, EO has an outdated data dictionary for RICS, which it uses to identify populations of cases for review. For all of these systems, EO describes selected data elements and how they are to be used in procedure, training, or other documents. For example, EOCA procedure and job aid documents define several codes used in the EOCA database and define common data elements from the database. EO officials cited various reasons for not having data dictionaries or for having outdated dictionaries. For RCCMS, EO officials said they use as their data dictionary a document that lists the data elements within each data table. The document shows the relationships between data elements but does not contain a description of each data element. Officials overseeing the EOCA and EOCA classification databases said they did not have adequate resources to develop this type of documentation, and also that data elements are already defined in other documents. The manager overseeing the referrals database said that the database predates his tenure, but fields have been added over the years, particularly in response to audits conducted by the Treasury Inspector General for Tax Administration.
Finally, EO officials said the RICS data dictionary is outdated due to lack of staffing and because documents listing transcribed lines from Form 990 filings, which are used by staff to develop examination selection queries, were kept up to date instead. Without complete and up-to-date data dictionaries, data element definitions are not available in a single document for individuals newly using a system, including EO employees and system developers. Also, since data dictionaries assist users and developers with understanding these databases, not having an adequate and up-to-date dictionary may result in an increased risk of data elements not being used accurately. To illustrate, the referrals database has certain obsolete data elements that were used to document decisions when procedures were different, yet we found that individuals were sometimes still entering information into these fields.

The E-Government Act of 2002 requires agencies to conduct a Privacy Impact Assessment before developing or procuring information technology systems or projects that collect, maintain, or disseminate information about members of the public, or before initiating a new electronic collection of information for 10 or more persons. EO has not conducted Privacy Impact Assessments for three of its databases used in examination selection processes—specifically, the referrals, EOCA, and EOCA classification databases. These systems house thousands of records describing reviews conducted on exempt organizations, including organizations’ names, Employer Identification Numbers, and other taxpayer information. We were initially told that the Tax Exempt and Government Entities (TE/GE) division does not consider the databases to be systems and that they therefore were not covered by requirements in the E-Government Act. However, in responding to our subsequent questions on the subject, IRS Chief Counsel stated that these databases likely fall within the purview of the Act. Accordingly, in early April 2015, EO officials began working with IRS’s Office of Privacy, Governmental Liaison, and Disclosure (henceforth, the Privacy Office) to evaluate whether the databases require a Privacy Impact Assessment. This evaluation involved TE/GE responding to a questionnaire and the Privacy Office assessing these responses to determine whether Privacy Impact Assessments are necessary to comply with E-Government Act requirements. As a result of the assessment, the Privacy Office determined that Privacy Impact Assessments were required for each of these three systems. According to a Privacy Office official, creating a final approved Privacy Impact Assessment can take 1 to 3 months.

Internal control standards require that key duties and responsibilities be divided or segregated among different people to reduce the risk of error and to achieve organizational goals. Referrals classification divides work among its five classifiers based on expertise for a particular type of referral. One classifier is responsible for each of the following areas:

1. Fraud and terrorism referrals, and referrals that request collaboration with another IRS division.
2. Political activity referrals.
3. Tax Equity and Fiscal Responsibility Act and state referrals.
4. Employee status, whistleblower, and international referrals.
5. High profile and church committee referrals.

In addition, each classifier, except for the individual responsible for political activity referrals, also reviews general referrals, which are referrals that do not fit into these categories.
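As an illustration only, the five-way division above can be expressed as a simple routing table in Python. The category keys and the round-robin handling of general referrals below are simplifications for the sketch, not EO’s actual system.

from itertools import cycle

# One specialist per referral category, mirroring the five areas listed above.
SPECIALTIES = {
    "fraud": 1, "terrorism": 1, "cross_division_collaboration": 1,
    "political_activity": 2,
    "tefra": 3, "state": 3,
    "employee_status": 4, "whistleblower": 4, "international": 4,
    "high_profile": 5, "church": 5,
}
# General referrals rotate among every classifier except the political-activity one.
_general = cycle([1, 3, 4, 5])

def assign_classifier(referral_type):
    # Unrecognized types are treated as general referrals in this sketch.
    return SPECIALTIES.get(referral_type, next(_general))

# Each sensitive category routes to exactly one person, which is the
# single-gatekeeper risk discussed next.
assert assign_classifier("church") == 5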
The classifiers review the referrals they are responsible for and make classification determinations. Each classifier has several years of experience, and some have received specialized training to work with the types of referrals under their specialties. Aside from general referrals, classifiers are not cross-trained on referral specialties, according to the EO referrals manager. The specialization of the classifiers allows for in-depth knowledge of complex issues and for the opportunity to apply experience; however, internal control risks accompany this approach. First, for political activity, church, and high profile referrals, the classifier appears to serve as an initial gatekeeper for determining whether a referral is reviewed by a committee. Although committee reviews are intended as a safeguard against unfairness in the examination selection process, referrals that do not make it past the classifier do not undergo committee review. For example, according to the EO referrals manager, all political activity referrals are supposed to go to the Political Activities Referral Committee to ensure that they receive the benefits of committee review. However, the political activity referrals classifier exercises some judgment in determining which referrals are categorized as political activity referrals. Further, in our review of the Referrals Database, we found that about 5 percent of political activity referrals classified in fiscal year 2014 were not reviewed by the committee. The classifier for the high profile and church referrals has more discretion in deciding which referrals go to the committee. In our database review, we found that about 91 percent of these referrals were not reviewed by the committee. According to the EO referrals manager, this is often because they did not contain enough information or did not deal with tax issues (known as “no issue” referrals). These numbers highlight the extent to which classifiers make decisions outside of the committee review process. According to internal control standards, key duties and responsibilities should be divided or segregated among different people to reduce the risk of error. Spreading classification responsibilities for sensitive referrals to more than one classifier could help decrease the potential influence of any one classifier. Even if other safeguards are in place, having the same individual initially classify all political activity or all high profile and church referrals creates the potential for unfairness, particularly for referrals that the classifier labels as “no issue” and that therefore are not sent for committee review. The lack of cross-training among the classifiers also creates concerns about succession planning. For example, three of the five classifiers are eligible for retirement, according to the EO referrals manager. Internal control standards require that management should ensure that skill needs are continually assessed and that the organization is able to obtain a workforce with the skills necessary to achieve organizational goals. As a part of its human capital planning, management should also consider how best to retain valuable employees, plan for their eventual succession, and ensure continuity of needed skills and abilities. The EO referrals manager agreed that they should have a succession plan, but said that they are working with a “bare bones” staff and do not have the resources to take time away from current classifiers’ duties for training.
TE/GE executives agreed that cross-training classifiers is ideal, but also stated that it is currently not feasible given existing resources. However, without cross-training or provisions for succession planning, EO is risking its ability to process referrals upon the departure of one of its classifiers. It is inevitable that classifiers will eventually need to be replaced. By completing some cross-training ahead of those departures, referrals classification could benefit from overlapping specialties and from spreading responsibility for classifying sensitive referrals, reducing the potential for error and unfairness.

According to internal control standards, procedures that enforce management directives, such as IRM requirements on the rotation of staff, help ensure that management directives are carried out. Management should also design and implement internal controls based on the related costs and benefits. The IRM requires that committee members rotate every 12 months, on a staggered schedule. The members of the referral review committees are not rotating every 12 months. Based on start dates for committee members provided by EO and our database review, we found that 87 percent of current committee members had been serving for more than a 12-month period as of April 30, 2015. Specifically, current committee members who exceeded their 12-month tenures have been serving on the committees for an average of 34 months (see table 8). In addition, although not reflected in this table, some committee members may have served on another committee prior to their current term, according to the EO referrals manager. Our database review showed that some committee members have served on committees since 2009. Rotating staff may help ensure that a variety of staff serve on the committees, which serve as a safeguard in the classification of political activity and sensitive referrals. Because the required committee rotation serves as an internal control to address risks in the referrals examination selection process, EO’s failure to follow its own procedures creates the potential to fall short of its goal of fairness. Committee member volunteers are to be senior EO technical employees and should be solicited in an annual memorandum from the Director of EO Examinations, according to the IRM. EO did not have records of an issued memorandum soliciting volunteers, although all of the current members are volunteers, according to an EO official. The EO referrals manager told us that it is difficult to get volunteers because of the potentially high volume of work, particularly for political activity referrals, and the difficulty of providing prospective committee members with an estimate of the time commitment required. However, by not utilizing all avenues for soliciting volunteers—such as the memorandum from the Director of EO Examinations—EO is not reaching the full pool of potential volunteers and maximizing opportunities to rotate committee members. At the same time, management should design and implement internal controls based on the related costs and benefits. The current committee members have likely developed an expertise in assessing these cases, and training new members may require additional resources. Although a provision for rotations is consistent with internal controls, a rotation period longer than 12 months may suffice. If EO believes that to be the case, it could revise the IRM accordingly.
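The rotation check described above is straightforward to express in code. The minimal sketch below, with hypothetical member names and start dates, computes each member’s tenure as of April 30, 2015 (the as-of date used in the report), and flags anyone past the 12-month limit.

from datetime import date

AS_OF = date(2015, 4, 30)  # as-of date used in the report

members = {  # hypothetical start dates, for illustration only
    "member_a": date(2009, 6, 1),
    "member_b": date(2014, 10, 15),
    "member_c": date(2012, 3, 1),
}

def months_served(start, as_of=AS_OF):
    return (as_of.year - start.year) * 12 + (as_of.month - start.month)

over_limit = {m: months_served(s) for m, s in members.items() if months_served(s) > 12}
share = len(over_limit) / len(members)
average = sum(over_limit.values()) / len(over_limit)
print(f"{share:.0%} of members exceed the 12-month rotation; "
      f"average tenure among them is {average:.0f} months")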
The Exempt Organizations unit (EO) is faced with the challenging task of overseeing the diverse population of exempt organizations and enforcing their compliance with the tax laws. EO’s reliance on a variety of sources and processes to select organizations’ returns for examination underscores the importance of its having a robust internal control system to ensure selection fairness and integrity, in accordance with the Tax Exempt and Government Entities (TE/GE) division’s mission. EO has some controls in place that are consistent with internal control standards, and it has implemented some of these controls successfully. However, there are several areas where EO’s control system could be strengthened, as evidenced by the control deficiencies identified in this report. Many of these deficiencies pose a risk that could lead to returns being selected, or not selected, for examination based on criteria or practices that fall short of TE/GE’s mission of ensuring fairness and integrity. For example, monitoring of internal controls is one way to help reduce risk, but without consistent monitoring, there is the possibility that returns could be selected for unfair reasons.

Control design improvements. To reduce the risk of returns being selected unfairly, EO could make several improvements to its examination selection control design. Formalizing additional procedures in the Internal Revenue Manual (IRM) would ensure coverage by standards that dictate when deviations from selection procedures are appropriate, and would increase transparency. Ensuring that all procedures are current and accurate would reduce the risk of employees following incorrect procedures and administering tax laws inconsistently. Enhancing the monitoring both of database files used to document examination selection decisions and of the content of examination files would increase EO management’s ability to address and rectify problems such as missing signatures, errors in applying criteria, and inappropriate justifications for case selection or dismissals—each of which might result in case selection decisions inconsistent with TE/GE’s mission. Improving the tracking of closed examination files would have multiple benefits, including facilitating congressional oversight. More consistent documentation and approval of criteria in EO’s project files would reduce the risk that inappropriate selection criteria could be developed. Finally, more accurate coding of examinations would reduce the risk of ineffective or inefficient resource allocation decisions for compliance activities.

Control implementation improvements. EO could also make several improvements to control implementation that would reduce the risk of returns being selected unfairly. Maintaining up-to-date documentation of the data element definitions used in EO databases to track case selection would reduce the risk of inaccurate use of data elements. Additionally, EO has the opportunity to provide cross-training for referrals classifiers and to quickly benefit from their resulting skills enhancements. This would allow shared responsibility for reviewing political activity and sensitive referrals, and could also reduce the potential for unfairness. Providing training for classifiers currently onboard, rather than waiting until a classifier departs and it becomes a necessity, would help preempt a significant void of knowledge and an increased backlog of work.
Finally, rotating the staff who serve on referral review committees would help ensure that a variety of staff serve on committees, providing a safeguard for maintaining fairness and objectivity in the classification of political activity and sensitive referrals.

To better ensure the Exempt Organization (EO) unit’s adherence to the Tax Exempt and Government Entities (TE/GE) division’s mission of “applying the tax law with integrity and fairness to all” in selecting exempt organizations to review or examine, we recommend that the Commissioner of Internal Revenue direct EO to take the following nine actions:

1. Complete the development of and formally issue the Internal Revenue Manual (IRM) sections on compliance checks and compliance reviews, and develop and formally issue an IRM section on Exempt Organization Compliance Area (EOCA) classification.
2. Develop, document, and implement a process to ensure that IRM sections and other procedures are reviewed and updated annually, and that updates reflect current practice, as required.
3. Develop, document, and implement additional monitoring procedures to ensure that case selection controls, including procedures for obtaining required signatures and documenting explanations for selection decisions, are being followed.
4. Develop, document, and implement procedures to ensure that all criteria or methods used in projects to select returns for examination are consistently documented and approved, including procedures related to documenting changes that occur during the course of a project, or new phases of a project.
5. Develop, document, and implement procedures for examination selection done by triage teams, including a process to consistently document selection criteria and triage team examination selection decisions.
6. Determine what additional controls may be needed to ensure that examinations related to projects are properly coded.
7. For the databases EO uses during examination selection, develop complete and up-to-date data dictionaries to define the data elements used in the databases.
8. Provide cross-training for referrals classifiers, prioritizing training for classifiers who work with political activity, high profile, and church referrals; and develop, document, and implement a system to ensure that those referrals are not always reviewed by the same classifier.
9. Ensure that referral committee members rotate every 12 months by soliciting volunteers. If EO does not believe that 12 months is an appropriate rotation length, then the IRM should be revised to require an alternative rotation schedule.

In addition, we recommend that the Commissioner of Internal Revenue take the following action:

1. Determine what additional controls may be needed to ensure that all closed examination files are tracked and maintained accurately.

We provided a draft of this report to IRS for review and comment. In its written comments, reproduced in appendix III, IRS generally agreed with our findings and recommendations. In its comments, IRS described the steps it plans to take to implement the recommendations. IRS also provided technical comments, which we incorporated where appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the Secretary of the Treasury, the Commissioner of Internal Revenue, and other interested parties.
The report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact us at (202) 512-9110 or mctiguej@gao.gov. Contact points for our offices of Congressional Relations and Public Affairs are on the last page of this report. GAO staff members who made major contributions to this report are listed in appendix IV.

This report (1) describes the processes for selecting tax-exempt organizations for examination, and (2) assesses the adequacy of the controls (including procedures) for selecting examination cases that the Exempt Organizations (EO) unit uses to achieve the Tax Exempt and Government Entities (TE/GE) division’s stated mission of “applying the tax law with integrity and fairness to all.” For the first objective, we reviewed Internal Revenue Service (IRS) documents that describe the processes and criteria for selecting exempt organization returns for examination. These documents included sections of the Internal Revenue Manual (IRM), procedures documents, training documents, worksheets for reviewing files, and summaries of selection processes prepared by EO officials. We also interviewed IRS officials responsible for overseeing examination selection. In addition, we obtained data from the following IRS databases: the Returns Inventory and Classification System (RICS), the Referral database, the Exempt Organizations Compliance Area (EOCA) database, and the EOCA Classification database. The databases contain information on initiated and closed examinations, classification of referrals, and compliance reviews and compliance checks. Based on our testing of the data and review of documentation and interviews, we determined that these data were reliable for the purposes of this report. We also held focus groups of selected EO staff who conduct examinations. In total, our groups involved 41 participants with an average of about 17 years of IRS experience, ranging from 8 months to 37 years. The focus groups were held at IRS offices in Ogden, Utah; Dallas; and Atlanta. We asked questions on internal control related topics, such as the clarity of EO procedures and the adequacy of training to apply these procedures. We used NVivo qualitative data analysis software to conduct a content analysis of themes from the focus groups.

To assess how well EO implemented its procedures and applied examination selection criteria, we used IRM sections, EO procedures documents, and other documents as criteria. We reviewed the populations of cases or referrals closed during fiscal year 2014 in the Referral, EOCA, and EOCA Classification databases. Within the populations, we looked for completeness of required fields used in conducting research or selecting returns for examination. For our population-level analyses, we considered any control with a non-adherence rate greater than 5 percent to be ineffective. We also selected random probability samples of the database files to review required text, such as justifications for selecting or not selecting a return for examination or review. Finally, we selected random probability samples of dismissed examination files and of closed examination files from the processes described earlier in this report.
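As an illustration of the population-level tests described above, the following sketch computes non-adherence rates for required fields and applies the 5 percent threshold. The file name and field names are hypothetical, not IRS’s actual export format or GAO’s actual tooling.

import csv

REQUIRED_FIELDS = ["selection_justification", "approver_signature"]  # hypothetical
THRESHOLD = 0.05  # tolerable non-adherence rate used in the methodology

def test_controls(path):
    with open(path, newline="") as f:
        rows = list(csv.DictReader(f))
    for field in REQUIRED_FIELDS:
        # A blank or missing value counts as non-adherence for that control.
        missing = sum(1 for r in rows if not (r.get(field) or "").strip())
        rate = missing / len(rows)
        verdict = "ineffective" if rate > THRESHOLD else "effective"
        print(f"{field}: {rate:.1%} non-adherence, control {verdict}")

test_controls("referrals_fy2014_export.csv")  # hypothetical export file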
The Exempt Organizations (EO) unit has procedures in place to document multiple types of examination selection decisions and approvals. Internal control standards state that control activities such as these procedures help ensure that management directives are carried out. Table 10 summarizes our findings on the effectiveness of implementation of the procedures, using a tolerable error rate of 5 percent (meaning that up to 5 percent of cases could fail to adhere to a procedure, and the procedure would still be considered effective). We found that some procedures were implemented successfully, and some were not. For some procedures, we were unable to obtain sufficient information to make a conclusive determination about whether implementation was successful. See appendix I for more details on our sampling methodology.

In addition to the contact named above, Jeff Arkin, Assistant Director; Carl Barden; Jehan Chase; Deirdre Duffy; Ted Hu; Laurie C. King; Meredith Moles; Jessica Nierenberg; Neil Pinney; Amy Radovich; Robert Robinson; Cynthia Saunders; Albert Sim; Lindsay Swenson; and James R. White made major contributions to this report.

The Government Accountability Office, the audit, evaluation, and investigative arm of Congress, exists to support Congress in meeting its constitutional responsibilities and to help improve the performance and accountability of the federal government for the American people. GAO examines the use of public funds; evaluates federal programs and policies; and provides analyses, recommendations, and other assistance to help Congress make informed oversight, policy, and funding decisions. GAO’s commitment to good government is reflected in its core values of accountability, integrity, and reliability. The fastest and easiest way to obtain copies of GAO documents at no cost is through GAO’s website (http://www.gao.gov). Each weekday afternoon, GAO posts on its website newly released reports, testimony, and correspondence. To have GAO e-mail you a list of newly posted products, go to http://www.gao.gov and select “E-mail Updates.” The price of each GAO publication reflects GAO’s actual cost of production and distribution and depends on the number of pages in the publication and whether the publication is printed in color or black and white. Pricing and ordering information is posted on GAO’s website, http://www.gao.gov/ordering.htm. Place orders by calling (202) 512-6000, toll free (866) 801-7077, or TDD (202) 512-2537. Orders may be paid for using American Express, Discover Card, MasterCard, Visa, check, or money order. Call for additional information. Connect with GAO on Facebook, Flickr, Twitter, and YouTube. Subscribe to our RSS Feeds or E-mail Updates. Listen to our Podcasts. Visit GAO on the web at www.gao.gov. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
IRS examines tax-exempt organizations to enforce their compliance with the tax code. Examinations can result in assessment of taxes or revocation of tax-exempt status, among other things. GAO was asked to review IRS's criteria and processes for selecting exempt organizations for examination. This report (1) describes these processes and (2) assesses the adequacy of examination selection controls. GAO reviewed IRS criteria, processes, and controls for selecting organizations for examination and spoke with IRS officials; assessed whether IRS controls followed Standards for Internal Control in the Federal Government; reviewed random probability samples from two populations of examination files; and conducted tests on populations and random probability samples from three databases used in EO examination selection to determine the adequacy of EO's control implementation (for files closed in fiscal year 2014). GAO also conducted eight focus groups on internal controls topics with EO staff who conduct research or make examination selection decisions. The Exempt Organizations (EO) unit within the Tax Exempt and Government Entities (TE/GE) division at the Internal Revenue Service (IRS) reviews organizations' applications for tax-exempt status to determine whether to grant status and oversees existing exempt organizations' compliance with the tax code. To identify exempt organizations for possible examination, EO uses a variety of information sources: for example, EO receives referrals of exempt organization noncompliance from third parties, such as the public, and from other parts of IRS. EO uses various controls intended to help it select exempt organizations for examination, in an effort to adhere to TE/GE's mission of “applying the tax law with integrity and fairness to all.” For example, EO maintains well-documented procedures for several examination selection processes in the Internal Revenue Manual (IRM), IRS's primary, official source of instructions to staff; staff can deviate from procedures that are included in the IRM only with executive management approval. In focus groups, EO staff generally told GAO that these procedures were valuable tools to help them administer the tax law. However, there are several areas where EO's controls were not well designed or implemented. The control deficiencies GAO found increase the risk that EO could select organizations for examination in an unfair manner—for example, based on an organization's religious, educational, political, or other views. Examples of internal control deficiencies GAO found include the following: Staff could deviate from procedures for some selection processes without executive management approval. GAO found that procedures for some processes—such as applying selection criteria to organizations under consideration for review—are not included in the IRM, as required by IRS policy. As a result, staff are not required to obtain executive management approval to deviate from these procedures. This increases the risk of unfair selection of organizations' returns for examination. EO management does not consistently monitor selection decisions. GAO found that IRS does not consistently monitor examinations and database files to ensure that selection decisions are documented and approved, to help ensure fairness. GAO's review of examination files found that approval of some selection decisions was not documented, as required by EO procedures.
For example, GAO's analysis of a sample of files suggests that an estimated 12 to 34 percent of cases where staff initially selected an organization for examination, but ultimately decided not to perform the examination, were missing the indication of management approval of the final decision, as required in the IRM. Continuous monitoring is an element of internal control; EO management has not been conducting sufficient monitoring to ensure that required approvals were taking place. GAO is recommending that IRS take 10 actions to improve selection control design and implementation, such as ensuring that all selection procedures are included in the IRM and thus subject to executive management approval, and developing additional examination selection monitoring procedures. IRS generally agreed with the recommendations.
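For readers unfamiliar with how a sample-based range such as "an estimated 12 to 34 percent" arises, the sketch below computes a 95 percent confidence interval around a sample proportion using the Wilson score method. The sample counts are hypothetical illustrations, not GAO's actual sample.

import math

def wilson_interval(successes, n, z=1.96):
    # Wilson score interval for a binomial proportion (95 percent when z=1.96).
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

# e.g., 9 of 40 sampled files missing the required approval (hypothetical)
lo, hi = wilson_interval(9, 40)
print(f"estimated {lo:.0%} to {hi:.0%} of the population missing approval")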
Job Corps was established as a national employment and training program in 1964 to mitigate employment barriers faced by severely disadvantaged youths. Job Corps enrolls youths aged 16 to 24 who are economically disadvantaged, in need of additional education or training, and living in disorienting conditions such as a disruptive homelife. Students may enroll in training programs throughout the year and progress at their own pace. Job Corps provides participants with a wide range of services, including basic education, vocational skills training, social skills instruction, counseling, health care, room and board, and recreation. The program offers vocational skills training in areas such as business occupations, automotive repair, construction trades, and health occupations. Participation in Job Corps can lead to placement in a job or enrollment in further training or education. It can also lead to educational achievements such as attaining a high school diploma and skills in reading or mathematics. Job Corps is unique in that, for the most part, it is residential. About 90 percent of the youths enrolled each year live at Job Corps centers and are provided services 24 hours a day, 7 days a week. The premise for boarding participants is that most come from a disruptive environment and, therefore, can benefit from receiving education and training in a different setting in which a variety of support services are available around the clock. Job Corps operates in a very structured and disciplined environment. For example, established daily routines must be followed, as must specific rules and regulations governing such areas as acceptable dress and behavior. Furthermore, Job Corps participants must have permission to leave the Job Corps center grounds, and participants “earn” home leave, which must be approved before being taken and can be denied for a number of reasons, such as failure to follow a center’s rules of conduct. Job Corps typically employs residential staff to oversee dormitory living and security staff to ensure the safety and well-being of its participants. The program recently implemented a “zero tolerance” policy for violence and drugs. This policy includes a “one-strike-and-you’re-out” provision for the most serious violent or criminal offenses as well as for drug violations. Job Corps currently operates 109 centers throughout the mainland United States, Alaska, Hawaii, the District of Columbia, and Puerto Rico. Most states have at least one center, and several states have four or more centers. Job Corps’ nine regional directors are responsible for the day-to-day administration of the Job Corps program at the centers within their geographic boundaries. Private corporations and nonprofit organizations, selected through competitive procurement, operate the majority of the centers. However, the Departments of Agriculture and the Interior directly operate 28 centers, called civilian conservation centers, under interagency agreements. The regional directors are also responsible for overseeing the recruitment of youths for program participation and the placement of participants after they leave Job Corps. Recruitment, referred to as outreach and admissions by program managers, and placement services are provided by private contractors, the centers, or state employment service agencies under contract with the regional offices. During program year 1995, Job Corps spent about $60 million on outreach and admissions as well as placement contracts.
This included amounts paid to contractors solely for outreach and admissions and placement services. In addition, a portion of the funding for some Job Corps center operation contracts was specifically designated for outreach and admissions and placement services. Job Corps contractors are expected to meet certain levels of achievement in order to continue to participate in the program and receive program funding. One performance standard established for outreach and admissions contractors relates to “quotas” of male and female youths to be enrolled (as specified in the contract), and a second standard relates to the proportion of participants who are to remain in the program for more than 30 days (90 percent). A third standard relates to the percentage of participants who are eventually placed following termination from the program (70 percent). Similarly, placement contractors are required to meet established standards related to the percentage of participants placed in jobs, the military, schools, or other training programs (70 percent). Additional standards are applied to participants who are placed in jobs. These standards relate to the percentage obtaining full-time jobs (70 percent) and jobs directly related to the vocational training received (42 percent). A fourth placement standard relates to the average wage received at placement. Individuals enroll in Job Corps by submitting applications through outreach and admissions contractors. The length of time students stay in Job Corps can vary substantially—from 1 day to 2 years. In program year 1995, about 15 percent of the enrollees left Job Corps within 30 days of entering the program, and more than one-fourth left within 60 days. On average, however, students spend about 7 months in the program. Students leave Job Corps for a variety of reasons, including successful completion of the program objectives, voluntary resignation, disciplinary termination, and being absent without leave (AWOL) for 10 consecutive training days. With a few exceptions, participants terminating from Job Corps are assigned to a placement contractor for assistance in finding a job or enrolling in other education or training programs. Placement contractors are to give priority to finding full-time, training-related jobs for participants.

We found that Job Corps’ policy guidance on two of its eligibility criteria was ambiguous and incomplete. As a result, the program’s eligibility process was not following all the requirements of the law or program regulations. The law specifies program eligibility requirements, including age, economic status, educational needs, medical condition, and behavioral condition—all defined in the legislation, implementing regulations, or Labor policy guidance. Another legislative requirement—living in an environment characterized by disorienting conditions—has not been clearly defined in the statute, regulations, or Labor’s guidance. Further, Labor has not provided adequate guidance regarding the requirement that participants have the capability and aspirations to complete and secure the full benefits of Job Corps. Contractors are required to follow Labor’s Policy and Requirements Handbook, which sets out 11 eligibility criteria for the program that all participants must satisfy: age, economically disadvantaged, requires additional education or training, environment, health history, behavioral adjustment history, capability and aspirations to participate, legal U.S.
resident, child care, parental consent, and Selective Service registration (see app. II). The first seven are specified in the law. The policy handbook generally provides guidance on what is needed to meet most of these criteria. For example, to be eligible under the education or training criterion, an applicant must be a dropout or in need of additional education, training, or related support services in order to hold meaningful employment, participate in regular school work, qualify for other training, or satisfy armed forces requirements. However, guidance on two of the criteria (environment and capability and aspirations) is vague. One of Job Corps’ eligibility criteria specified in the law for participation in the program relates to environment: A participant must come from “an environment so characterized by cultural deprivation, a disruptive homelife, or other disorienting conditions as to substantially impair prospects for successful participation in other programs providing needed training, education, or assistance.” Program regulations go on to explain that the disorienting condition must be one that would impair the applicant’s chance of success in a nonresidential program rather than a residential Job Corps program. Job Corps legislation, Labor’s program regulations, and Job Corps’ policy handbook list environmental factors to be considered when assessing eligibility, but these sources of program guidance are not entirely consistent nor do they contain adequate definitions (see table 1). With the exception of the regulatory definition of disruptive homelife, program guidance does not define the factors that make up the environmental criterion. In the absence of specific definitions of the environmental criterion, admissions counselors applied their own interpretations. As shown in table 1, Labor includes “limited job opportunities” in its policy handbook as a disorienting condition that fulfills the environmental eligibility requirement. However, none of the sources of program guidance specifically defines this factor or gives any direction to assessment counselors to help them interpret it, nor do they explain how limited job opportunities affect the chance of success in a residential program compared to a nonresidential one. In prior Job Corps regulations, Labor included among “disruptive conditions” that could impair an applicant’s prospect to participate fully in nonresidential training “a neighborhood or community characterized by high crime rates, high unemployment rates, high school dropout rates, and similar handicaps.” Unlike the present regulations, the prior version made clear that applicants might be subject to more than one disruptive factor and that several factors in combination might satisfy this impairment criterion. Labor’s present guidance does not explain how “limited job opportunities” by themselves can satisfy this criterion. Nonetheless, limited job opportunities was the factor cited as fulfilling the environmental eligibility requirement for 92 percent of the 68,000 Job Corps enrollees in program year 1995. Because admissions counselors generally indicate only one environmental factor on the Job Corps application form, we have no way of knowing how many of these participants would have met the environmental criterion had limited job opportunities not been used to fulfill the requirement. Further, the admissions counselors we interviewed had varying interpretations of limited job opportunity. 
Some thought that it referred to the applicants’ lack of job skills or lack of education, whereas others thought that it referred to the economic condition of the geographic areas in which applicants resided or to applicants’ being too young or lacking transportation. Cultural deprivation, another eligibility factor that could fulfill the environmental criterion, was not clearly defined—in fact, it is not even listed in Labor’s policy handbook—and was also interpreted differently by various admissions counselors. One contractor referred to persons who had never gone to a museum or the beach; another thought it applied to a situation such as raising a minority child in a nonminority family; a third referred to living in a housing project. Most admissions counselors we interviewed admitted that they had no idea what this term meant. Finally, Labor’s policy handbook restricts what can be considered under the environmental criterion, stating that to be eligible an applicant must be living in an environment characterized by a disruptive homelife; an unsafe, overcrowded dwelling; limited job opportunities; or a disruptive community with high crime rates. However, the handbook excludes cultural deprivation—specified in the statute and Labor’s own regulations—from the permitted environmental factors. The Job Corps law states that to enroll in Job Corps, an applicant must, after careful screening, have the present capability and aspirations to complete and secure the full benefit of the program. However, in determining whether applicants meet this requirement, Labor relied primarily on an evaluation form that assesses behavior that would be expected of any and all applicants. Without more detailed guidance on the use of this criterion, the program may not always be serving those who are most likely to benefit from it. In previous work, we found that ensuring that project participants are committed to training and getting a job is a key feature of successful employment training projects. The law does not define “capabilities and aspirations” but leaves to Labor the tasks of defining this term and providing guidance on how it is to be implemented. Labor has developed the “Capability and Aspirations Assessment Tool,” which admissions counselors must complete for each applicant (see app. III). This “tool” formulates four categories of factors—commitment, attitude, capability, and compatibility of applicant and program goals—that are used to assess capability and aspirations and to demonstrate suitability for the program. Factors under commitment include meeting scheduled appointments on time, providing requested documents such as birth certificates, and reacting favorably to program requirements such as following center rules and living away from home. Attitude includes willingly responding to questions and behaving respectfully during the interview. Capability involves obtaining documentation that supports an applicant’s ability to benefit from the program, such as school, court, or medical records or a letter from a former employer. Compatibility of applicant and program goals relates to the admissions counselor’s opinion that an applicant’s expressed goals—for example, for job placement or vocational training—can be realistically achieved through Job Corps. The factors specified in Labor’s assessment tool include characteristics that, if not displayed, would be an appropriate basis for rejecting an application.
However, the possession of these characteristics does not necessarily demonstrate that an applicant has the ability and motivation to benefit from Job Corps. Job Corps outreach and admissions contractors and regional staff whom we spoke with pointed out shortcomings in the current approach to assessing applicants’ capability and aspirations. Staff in one of Labor’s regional offices stated that admissions counselors have asked for additional guidance in making better decisions on capability and aspirations. An admissions contractor with statewide recruiting responsibility in one state said that there is a need for a valid assessment tool for this criterion because the current tool is inadequate. Another contractor stated that it filled out Labor’s assessment tool because it is a program requirement but did not use it in assessing the suitability of applicants. One of Labor’s regional offices has started to develop a more meaningful tool. A substantial number of Job Corps participants leave the program within a short time after enrollment—about one-fourth of program year 1995 participants left within 2 months. Therefore, we believed that it would be useful to identify ways contractors could target recruitment efforts and the selection of applicants to the eligible youths who are more likely to stay in the program and, thus, more likely to benefit from it. To determine the factors that might be related to program retention, we visited a number of outreach and admissions contractors to examine their practices in assessing and screening applicants for the program. We also analyzed the characteristics of the more than 68,000 program year 1995 participants to determine which characteristics were associated with remaining in Job Corps for at least 60 days. In our visits, we identified several procedures that distinguished outreach and admissions contractors with higher retention rates from other outreach and admissions contractors. In general, these procedures were aimed at identifying applicants with the commitment and motivation to remain in and benefit from the program. Our statistical analysis provides some information about characteristics significantly related to the likelihood of remaining in the program for at least 60 days that Labor could use to design outreach efforts, establish priorities among applicants, or improve the retention rate for those who might otherwise leave the program early. Of the 11 outreach and admissions contractors that we visited, those with higher retention rates (10 percent or fewer of their enrollees dropping out within the first 30 days) tended to have better procedures for identifying applicants with the commitment and motivation to remain in and benefit from the program. That is, these contractors emphasized making sure that applicants met the program’s statutory eligibility criterion of having the capability and aspirations to complete and secure the full benefit of the program. These more-successful contractors’ procedures included “commitment checks” and preenrollment tours and briefings, which gave applicants a more realistic basis for deciding whether they wanted to enroll. The emphasis in these programs was consistent with the finding we reported in a May 1996 report on successful training programs—that a key job-training strategy shared by successful programs was a focus on ensuring that participants are committed to training and getting a job. It was also consistent with the opinions expressed by several regional directors we interviewed.
The “commitment checks” contractors used were designed to test Job Corps applicants’ initiative. For example, several contractors required individuals interested in Job Corps to set up application appointments. Admissions counselors at four contractors also mentioned that they required applicants to arrive for their meetings dressed in proper attire; otherwise, they had to schedule another appointment. In addition, three admissions counselors required applicants to submit written statements of why they wanted to participate in the program and what they hoped to accomplish. Several admissions counselors required applicants to call weekly between the date of application and the enrollment date to determine the status of their application and to demonstrate their continued interest in the program. Finally, one contractor used a nine-point checklist of documents that all interested persons had to acquire before they set up their application appointment. Some outreach and admissions contractors considered preenrollment tours and briefings to be extremely useful, although they were not practical in every situation. They provided applicants with a firsthand opportunity to obtain a thorough understanding of Job Corps rules and requirements, observe the living conditions, erase false expectations, and determine whether they were suited for regimented life. In some instances, these preenrollment briefings were given before application; in others, they took place afterward. For example, one contractor required that all interested individuals attend a prearranged tour and briefing. After taking the tour, attending the briefing, and participating in a question and answer session, those still interested had to set up an appointment to complete an application. Another contractor required potential enrollees to take a tour after the application process. Following the tour, applicants attended a briefing and question and answer session, followed by one-on-one interviews with center staff. The value of preenrollment tours and briefings was also confirmed by Job Corps participants at two of the centers we visited who thought the tours and briefings were definitely worthwhile and by two regional directors who agreed that the preenrollment tours and briefings were very effective in preparing applicants for Job Corps and in improving program retention. These tours and briefings would help meet the law’s requirements that applicants be given a full understanding of Job Corps as well as what is expected of them after enrollment. Several regional directors commented on the importance of identifying applicants who are ready for Job Corps and can benefit from its training. For example, one regional director stated that because the program cannot afford to squander its resources on applicants who do not really want to be in the program, admissions counselors should ensure that applicants are ready and can benefit from the investment. Another regional director noted that because so many people are eligible for Job Corps (over 6 million), it was important to provide this opportunity to those most likely to benefit and that commitment should be “first and foremost” when assessing applicants. Another regional director agreed that commitment was important but considered the program’s Capability and Aspirations Assessment Tool to be ineffective in measuring it.
In our analysis, we identified several characteristics associated with program retention that Labor might consider in designing outreach efforts, establishing priorities among applicants, or improving participant retention rates. Some of these characteristics would be of limited value nationwide, however, because so few participants had them. In addition, when considering how to use the results from our analysis, Labor also needs to consider other factors. Two of the characteristics most strongly related to the likelihood of remaining in the program were need for bilingual education and years of education. Of the characteristics we examined, the need for bilingual education had the strongest relationship with the likelihood of remaining in the program. Participants needing bilingual training—Spanish as well as other languages—were much more likely than others to remain in the program for at least 60 days. Education was also an important factor—participants with 12 or more years of education were more likely to remain than participants with 8 or fewer years of schooling. Another characteristic with a strong relationship to retention was age. Our analysis indicated that older participants had a greater likelihood than younger participants of remaining in the program. Specifically, when compared to 15-17-year-old participants, those aged 18 to 20 and 21 to 25 were more likely to remain in the program for at least 60 days. This analysis supported the concern expressed by many of the admissions counselors we interviewed regarding enrollment, retention, and placement of 16- and 17-year-old youths, who made up nearly 40 percent of program year 1995 enrollees. The concerns they expressed were that these younger youths are often victimized by older participants at the center, have a harder time adjusting to center life, are more likely to drop out, and are difficult to place. Labor program year 1995 outcome data showed that 16- and 17-year-old terminees were less likely to be placed once they left the program (see fig. 1). Because of the difficulty in placing 16- and 17-year-old participants, one regional Labor official believed that the minimum age for enrollment should be increased, while another thought that there should be separate standards for these participants. In contrast, a third regional Labor official thought that maturity, and not age, should be the deciding factor for enrollment. He acknowledged, however, that the program should probably have different expectations and performance standards for 16-year-old participants. Another Labor official told us that a work group has been established to look into the problem of serving 16- and 17-year-old participants. Appendix IV discusses our statistical analysis of characteristics related to remaining in the program at least 60 days, including limitations associated with the analysis. Table IV.3 in that appendix contains the final model and significance levels. For example, it shows that other factors that had a significant relationship to the likelihood of remaining in the program for at least 60 days included residing less than 50 miles from the assigned Job Corps center, being a nonresidential student, having no dependents, and having served in the military. Additionally, some of the factors that proved to be useful predictors of remaining in the program were characteristics of only small subsets of participants.
For example, because relatively few participants had a need for bilingual education (less than 3 percent of the Job Corps population), that characteristic was limited in its value as a feature for nationwide use in screening. Because we found no large subgroups with great differences, the ability of the model we used in our analysis to predict 60-day retention for the program’s full population is limited. In deciding how to use the results of this analysis, Labor would need to consider more than the statistical results. For example, it would clearly be inappropriate to use these findings to exclude applicants who met the statutory eligibility requirements because they had characteristics associated with a low likelihood of completing the program. If Labor chose to consider these characteristics in designing outreach efforts or establishing priorities for eligible applicants, it would be faced with the complexity of integrating these results with existing eligibility requirements and program policy. For example, our results showed that participants with at least 12 years of education were more likely to remain for 60 days than those with less education. Many youths with that many years in school, however, might not meet the eligibility requirement of needing additional education or training to secure and hold meaningful employment, participate successfully in regular school work, qualify for other suitable training programs, or satisfy armed forces requirements. The most clear-cut use of this information on participant characteristics may be in designing efforts to improve the retention rate of participants with characteristics associated with leaving the program early. Labor uses performance measures in deciding whether contractors are to continue to participate in the program. However, Labor does not have the information it needs to accurately assess the performance of its placement contractors. We found that two of the four measures Labor used in assessing placement contractor performance were not meaningful. One of the measures held contractors accountable for placing participants who were realistically unemployable. A second measure, relating to the placement of terminees in training-related occupations, included terminees who received little vocational training and also gave placement contractors wide latitude in deciding whether placements were related to training. Job Corps requires placement contractors to assist all terminees with placement regardless of how long they were in the program or the reason they left, and it has established the following standards to measure contractor performance: 70 percent of all terminees assigned to a contractor are to be placed; 70 percent of all placements are to be in full-time jobs; the average wage paid to participants placed in jobs is to be equal to or greater than a specified level; and 42 percent of all job placements are to be in occupations related to the training received. In calculating a contractor’s placement performance, Labor includes participants who remained in the program for as little as 1 day, those who were AWOL, and those who were expelled from Job Corps after 30 days for using drugs or committing violent acts—all individuals a placement contractor would have difficulty recommending for employment. During program year 1995, about one-third of the participants leaving Job Corps were in these categories.
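To make the effect of these counting rules concrete, the following sketch compares a placement rate computed over all terminees with one computed over an adjusted pool that excludes the hard-to-place categories just described. This is a minimal illustration in Python; all counts are invented, not Labor data, and the adjustment simply mirrors the modification discussed in the next paragraph.

# Hedged sketch: how excluding hard-to-place terminees changes a placement
# rate. All counts below are hypothetical; they are not Labor data.

def placement_rate(placed: int, pool: int) -> float:
    """Placement rate, in percent, for a given pool of terminees."""
    return 100.0 * placed / pool

placed_all, pool_all = 700, 1_000       # hypothetical contractor totals
# Suppose 300 terminees (roughly the one-third noted above) stayed under
# 30 days, were AWOL, or were expelled for drugs or violence, and that
# 150 of them were nonetheless placed.
excluded_pool, excluded_placed = 300, 150

print(placement_rate(placed_all, pool_all))                      # 70.0
print(placement_rate(placed_all - excluded_placed,
                     pool_all - excluded_pool))                  # about 78.6

With these invented counts, the adjusted rate happens to come out about 8 points higher, which is close to the average difference reported for the 12 contractors we visited in the next paragraph.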
If Labor’s methodology were modified to include only participants who were in the program for sufficient time to obtain at least minimal benefits (that is, stayed for at least 30 days) and were employable (that is, were not terminated for drug violations or violence and were not AWOL), the average placement rate for the 12 placement contractors we visited would be about 8 points higher—ranging from an increase of 2.6 points for one contractor to 13.6 points for another contractor—and the rank order among the 12 contractors would change somewhat. (See fig. 2.) About half of the placement contractors we visited suggested that Labor should exclude certain individuals when calculating placement rates. For example, one contractor noted that it is unreasonable to expect contractors to recommend to an employer someone who was expelled for taking drugs or committing a violent act. Another contractor believed that it was a waste of resources to try to place participants who were AWOL because they were not only difficult to locate but also undependable to an employer. A third contractor suggested that Labor’s methodology include only participants who are truly employable. Similarly, a regional director stated that it is ridiculous to require placement specialists to be responsible for placing participants who stayed in the program a very short time, were expelled for drug use or violence, or were AWOL. He said that this responsibility asks the placement specialist to lie to employers by recommending they hire these people. Another regional director agreed that placement contractors should not be responsible for participants who received no benefit from the program or who were kicked out for violating the program’s drug and violence policies. The job-training match measure is used to evaluate the effectiveness of vocational training programs and placement contractors by determining the percentage of jobs terminees obtain that match the training they received while in Job Corps. Labor allows placement contractors wide discretion in deciding whether a job placement they obtain for a terminee is related to the training received—another measure of performance. At the same time, Labor requires that terminees who receive little vocational training be included in the calculation of this measure. As a result, the value of the current job-training match performance measure is questionable. Labor is developing a new system to determine job-training matches that, it believes, will be more accurate. Labor’s guidance gives placement contractors wide latitude in deciding whether a job placement was a job-training match. According to Labor guidance, a job-training match results when a participant is placed in a job requiring skills similar to those included in the participant’s training. Placement contractors are responsible for recording this information. Labor’s guidance for these decisions consists of 16 broad categories of training programs, and within each category are a varying number of detailed occupations in which Job Corps participants may be trained. In addition, each of the 16 broad categories contains a list of jobs that would be considered a match with the training received. To illustrate, the broad training category of construction trades includes 47 detailed training occupations and 357 placement occupations.
An individual who was trained in any one of the 47 training occupations and then was placed into any one of the 357 placement occupations would be counted as having made a job-training match. Overall, Labor’s system includes nearly 300 detailed training occupations and more than 5,700 job placement occupations. In addition to the wide range of jobs that are considered to be training matches under each of the broad training categories, Labor’s guidance includes jobs that appear to bear little, if any, relationship to the training received. For example, a position as a key cutter would be considered to be a training match for any of the 51 training occupations under the broad category of mechanics and repairers, which includes auto mechanic, electronics assembler, and parts clerk. A position as a general laborer would be considered to be a job-training match for any of the 30 training occupations under the precision production category, which includes mechanical drafter, sheet metal worker, and welder. Table 2 lists examples of some possible matches under Labor’s guidance. Many of the positions that are considered to be related to Job Corps training require relatively little training to perform. The job placement occupational categories contained in Labor’s guidance for job-training match come from its Dictionary of Occupational Titles. The dictionary includes, for each occupation, the average time required to learn the techniques, acquire information, and develop the facility for average performance in a specific job situation. For more than 700 of the jobs included in Labor’s guidance, the average training time is indicated as either only a short demonstration or training up to and including 1 month. Thus, Labor is allowing job-training match credit for occupations requiring relatively short training time even though participants spend an average of about 7 months in the program at an average cost of about $15,300 each. While we recognize that some of these positions provide entry into an occupational area that may lead to a better job, in our view it is questionable to consider such positions to be a job-training match until the participant advances into a job commensurate with the training received. Further, Labor guidance encourages placement contractors to search among the allowable jobs for a job-training match. Its policy handbook states that, if a job-training match is not generated when a job placement code is entered in its automated system, the placement contractor is allowed to enter a different code that may generate a job-training match, “so long as integrity of data is maintained.” We found that placement contractors’ practice of recording job-training matches does indeed raise questions about the integrity of the data. One contractor told us that if a placement specialist obtained a job for a terminee that was not a job-training match under Labor’s guidance, then the manager and placement specialist would meet to determine how to make it a match. This same contractor claimed that it is possible to get a job-training match in fast-food restaurants for participants trained as bank tellers, secretaries, and welders. For the most part, the placement contractors we visited similarly indicated that creativity is used when entering the code for the placement job in order to obtain a job-training match and raised concerns about the validity of reported job-training match statistics.
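To illustrate how broad the allowable matches are, here is a deliberately simplified sketch of the category-based lookup, written in Python. The occupation names are drawn from the examples above, but the tables themselves are invented miniatures: Labor’s actual guidance maps nearly 300 training occupations and more than 5,700 placement occupations.

# Hedged sketch of category-based job-training matching. The tables are
# invented stand-ins for Labor's guidance, not the actual code tables.

TRAINING_CATEGORY = {   # detailed training occupation -> broad category
    "auto mechanic": "mechanics and repairers",
    "welder": "precision production",
    "mechanical drafter": "precision production",
}

ALLOWED_PLACEMENTS = {  # broad category -> jobs counted as a match
    "mechanics and repairers": {"auto mechanic", "parts clerk", "key cutter"},
    "precision production": {"welder", "sheet metal worker", "general laborer"},
}

def is_job_training_match(training_occupation: str, placement_job: str) -> bool:
    """Any placement job listed under the broad category counts as a match."""
    category = TRAINING_CATEGORY.get(training_occupation)
    return placement_job in ALLOWED_PLACEMENTS.get(category, set())

print(is_job_training_match("auto mechanic", "key cutter"))            # True
print(is_job_training_match("mechanical drafter", "general laborer"))  # True
print(is_job_training_match("welder", "fast-food worker"))             # False

Because the test is membership in a broad category rather than similarity between the specific training and the specific job, loosely related placements such as those in table 2 can count as matches.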
The job-training match performance measure may also unfairly hold placement contractors accountable for placing certain terminees in training-related jobs. All participants placed in a job or the military are included in the calculation of job-training match, regardless of how long they received vocational training. Thus, participants who were in the program for only a few days or weeks and had little chance to participate to any extent in vocational skill training would be included in the calculation of the job-training match measure. Most of the placement contractors and regional staff we spoke with agreed that when calculating this measure it would be more meaningful to include only participants who completed their vocational skills training. Labor officials told us that they are revising the methodology for determining job-training matches. The proposed methodology will use an existing Bureau of Labor Statistics system for collecting occupational employment data by various industry classifications. This system uses about 830 five-digit codes rather than the 5,700 nine-digit codes used in the current methodology based on the Dictionary of Occupational Titles. In its comments on a draft of our report, Labor acknowledged that we made a legitimate point about the need to strengthen the job-training match process. According to Labor, the proposed system will be more accurate and easier to maintain and monitor in terms of egregious job-training matches. Labor hopes to have implemented the new methodology by July 1, 1998. In addition, Labor stated that the job-training match issue is one of the primary projects being addressed by a Job Corps committee to improve the quality of vocational outcomes. We found that a characteristic common to the contractors we visited that had higher placement rates was having staff solely responsible for providing placement services to Job Corps participants. In addition, most of these placement contractors were either Job Corps centers or had staff located at the centers they served. In contrast, Labor regional officials have been concerned with the performance of state employment service agencies and have not renewed many of their contracts during the past 2 years. We also noted that Labor and several of the Job Corps centers we visited were starting to improve links to the business community in an effort to increase placements. The placement contractors we visited had had varying success in placing Job Corps participants in program year 1995. Placement included getting a job, entering the military, or returning full-time to school. The seven contractors that had relatively high placement rates (over 73 percent) included four Job Corps centers and three private organizations. A common characteristic among these contractors was having staff who had only one responsibility—placing Job Corps participants. Other contractors that were not as successful used the same staff to perform outreach and admissions as well as placement. One contractor whose staff performed these functions noted that with the program’s emphasis on maintaining centers at full capacity, placement is often secondary to admissions. We also noted that most of the contractors with higher placement rates were either Job Corps centers or had staff at the center. Placement specialists located at these centers contended that being at the center allowed them easy access to instructors, counselors, and participants.
One Labor regional director also mentioned the importance of having a continuity of services from the time enrollees arrive at the center until they are placed, noting that it was no accident that every center in his region also has a placement contract. In contrast, the placement contractor we visited with the highest placement rate was not a Job Corps center and did not have staff at a center. The program manager of this private company viewed Job Corps placement as a business and ran the organization accordingly—either placement specialists produced jobs for Job Corps participants or else the program manager found someone who could. Thus, having a focus on the ultimate goal—placement in a job—is a strategy associated with a high placement rate. State employment service agencies are one type of contractor that generally has not had high placement rates. Between program years 1994 and 1996, Labor regional offices did not renew two-thirds (12 of 18) of the placement contracts they had with state employment service agencies (see table 3). Labor officials in three regional offices informed us that they cancelled the placement contracts with state employment service agencies because of poor performance. A Labor official in a fourth region stated that the agency had sent a letter of concern to the state employment service agency because it was the worst-performing placement contractor in the region. Five of the six remaining state employment service placement contractors had placement rates in program year 1995 below the national Job Corps standard of 70 percent. Officials from two of the three state employment service agencies we visited expressed reservations about continuing to contract with Job Corps for placement services. For example, one employment service official said that the agency might not seek contract renewal because of its strained relations with Labor’s regional office. An official from another employment service commented that its Job Corps contract was really “small potatoes” and insufficient to provide for adequate staffing and that the only reason it was still involved was that the employment service commissioner believed that Job Corps was worthwhile and wanted to assist disadvantaged youths. An official from the third employment service agency we visited noted that the Labor regional office threatened to cancel its placement contract 2 years ago for poor performance and gave the agency another 6 months to improve. The official noted that, under new management, performance did improve and Labor renewed the agency’s contract for another 2 years. Placement specialists at the three employment service offices we visited stated that they had no contact with Job Corps participants before their termination. It also appeared that the major placement emphasis was to register Job Corps participants in the employment service databank. While this did provide access to a major source of potential jobs, it was the same service provided to regular job seekers using the employment service and was not any kind of specialized assistance. As pointed out by the Chairman of the Senate Subcommittee on Employment and Training, Committee on Labor and Human Resources, during hearings on Job Corps in April 1997, a key to program success is the development of links to the business community. However, concerns were raised about whether Job Corps had developed such links. We noted that several of the centers we visited that had higher placement rates also had good relationships with local businesses.
For example, one center had established a physical therapy program to meet the needs of local health facilities, and another center used temporary agencies as a springboard for its computer services trainees to gain access to the area’s computer industry. A third center was working on improving its work experience component to better match participants’ skills and abilities to the needs of local businesses so that more permanent hires would result. Labor regional offices are also exploring ways to improve links to the business sector. For example, one office has recently started a business roundtable of 18 employers in the region who discuss placement issues. Another regional office has begun a project to get local employers involved with training and placement. The idea is to have employers identify what they need in terms of training curriculum, equipment, and skills and then determine how the program can meet these needs. Recognizing the importance of employer links, Labor has launched a new school-to-work initiative within Job Corps to involve more employers in placing program terminees and to establish the basic framework for a school-to-work program. It started as a pilot program at three Job Corps centers and will be expanded to 30 centers this year. Further expansion will depend on the availability of funding. Labor’s program guidance to admissions counselors on two eligibility requirements was ambiguous and incomplete. One of the program’s eligibility criteria—living in an environment characterized by disorienting conditions—has not been clearly defined in the statute, regulations, or Labor’s guidance. In addition, Labor has not provided adequate guidance regarding the requirement that participants have the capability and aspirations needed to complete and secure the full benefits of Job Corps. As a result, outreach and admissions contractors may not be enrolling the applicants who are most appropriate for the program. In the absence of specific Labor guidance, we noted that outreach and admissions practices varied among contractors. Those with higher participant retention rates tended to have better procedures to identify applicants who have the capability and aspirations to remain in and benefit from the program. A particularly effective tool in preparing applicants for Job Corps appeared to be preenrollment tours and briefings. Most admissions counselors expressed concern about the enrollment of 16- and 17-year-old applicants. Labor data confirm that these youths are more likely to drop out early for disciplinary reasons and less likely to be placed once they leave the program. Although Job Corps is a performance-driven program, the measures used to assess placement performance may not be meaningful and thus may not provide Labor with the information it needs to accurately assess placement contractor performance. Labor’s system for calculating a contractor’s placement performance included program terminees who were realistically unemployable. Determining what happens to every program participant is an important indicator of how well Job Corps is performing but not necessarily an appropriate measure of a contractor’s placement performance. Guidance related to another placement measure—the extent to which terminees were placed in training-related occupations—gave contractors such wide latitude when deciding whether a job was related to the training received that the validity of the measurement was questionable.
In addition, the performance measure included terminees who received little vocational skills training and, therefore, were unlikely to be placed in jobs requiring an acquired skill. Labor is redesigning the methodology for determining job-training matches, which may help address some of these problems. However, any system would still be susceptible to manipulation by placement contractors without proper oversight and monitoring. To help ensure that Job Corps’ resources serve the most appropriate participants, we recommend that the Secretary of Labor provide clear and complete guidance on program eligibility criteria, ensuring that the guidance is consistent with the law, and provide better guidance to ensure that outreach and admissions contractors assess each applicant’s capability and aspirations to complete training and attain a positive outcome. Improvements are also needed to make the measures used to assess placement contractor performance more meaningful. Therefore, we recommend that the Secretary of Labor modify certain measures for placement contractors by (1) eliminating from the placement pool participants whom contractors realistically could not or should not be expected to place, such as participants who were expelled for criminal or violent behavior; (2) replacing the current job-training match system with one that captures realistic information and providing guidance to regional offices to ensure that the data are accurately recorded; and (3) establishing separate placement performance standards for participants with different levels of program accomplishment—for example, those who completed program requirements and those who dropped out early. In Labor’s comments on a draft of this report, the agency disagreed with our recommendation that it clarify and expand its program eligibility criteria in order to ensure that they are consistent with the law. Labor stated that our report lacked acknowledgment of the detailed specifications for eligibility requirements developed over the years in conjunction with the Office of Inspector General and that the eligibility, verification, and documentation requirements contained in its policy handbook are detailed and specifically related to guidance for Job Corps admissions counselors. Labor gave no indication of any formal action it planned to take on this recommendation. Although Labor expressed some concern with our remaining recommendations, it acknowledged that they have merit and warrant consideration and identified actions that it would take in response to them. Labor’s specific concerns with our report are in three broad areas—adequacy of program eligibility guidance, potential effect of additional assessment procedures, and recommended changes to placement performance measures, including training-related placements. Labor also pointed out a number of items in the draft report that it believed should be modified or clarified, and we acted on these, where appropriate. For example, we clarified that our discussion of the ambiguity of program eligibility guidance related to only 2 of the 11 criteria. We also made a number of other technical changes to our report to respond to Labor’s comments. Following is a summary of Labor’s concerns and our responses. Labor’s full comments are printed in appendix VI.
Regarding the adequacy of program eligibility guidance, Labor reiterated that our report lacked acknowledgment of the detailed specifications for eligibility requirements developed over the years in conjunction with the Office of Inspector General and that the eligibility, verification, and documentation requirements contained in its policy handbook are detailed and specifically related to guidance for Job Corps admissions counselors. Labor expressed concern with our characterization of the program eligibility guidance as inadequate. For example, regarding the lack of definition in Labor’s policy handbook for “limited job opportunities,” Labor commented that training conducted in program year 1995 for all admissions counselors included technical assistance material that defined this term as follows: “scarcity of jobs, commensurate with the skill levels of Job Corps-eligible youth and which has been designated as an area of substantial unemployment.” Labor added that “In essence, any applicant who lacks the specific skills required by the local labor market to obtain meaningful employment is a legitimate candidate for Job Corps.” Labor acknowledged that another eligibility factor—cultural deprivation—is not included in the policy handbook because more-specific factors—including (1) disruptive homelife, (2) unsafe or overcrowded dwelling, (3) disruptive community with high crime rates, and (4) limited job opportunities—were more useful to admissions counselors than the general term itself. Finally, Labor expressed concern with our discussion of the tool used in assessing another eligibility requirement—capability and aspirations. According to Labor, this assessment by its very nature must rely on the judgment of admissions counselors, and determining aspirations is very difficult and challenging; Labor stated that the current assessment tool will be revisited and modified according to suggestions from regional offices and admissions counselors. We disagree that sufficient policy guidance defining “limited job opportunities” was provided to admissions counselors at a training seminar. Even if all admissions counselors at that time received such guidance, contractors and staff have since turned over. And, as mentioned in our report, admissions counselors we interviewed had different interpretations of “limited job opportunities,” indicating that something more is needed to ensure consistent interpretation of this factor. Because Labor’s policy handbook was created to be “the single document containing all policy and requirements which would be: clear and concise, and up-to-date, and consistent with legislative provisions,” any definition of “limited job opportunities” that Labor develops should be incorporated into this policy handbook. In addition, the law states that environmental factors substantially impair an individual’s ability to succeed in training, not his or her ability to find employment. But Labor fails to explain the connection between its definition and the impairment of ability to succeed in training. And there is a separate eligibility requirement in the law that the applicant must “require additional education, training, or intensive counseling and related assistance in order to secure and hold meaningful employment . . . .” Labor’s interpretation of limited job opportunities appears to duplicate or at least overlap that separate requirement.
Finally, Labor fails to explain how its definition satisfies the program regulations that stipulate that the environmental criteria are to be used in the context of residential versus nonresidential programs. Nowhere in its guidance does Labor mention this distinction. We also disagree that Labor provided adequate guidance regarding the term “cultural deprivation.” On the Job Corps application form, Labor not only lists each of the four factors it says define “cultural deprivation” as separate and distinct eligibility factors (any one of which would satisfy the eligibility requirement) but also adds the term “cultural deprivation” as a fifth factor that can be used to meet program eligibility. Guidance for completing the application form does not define this term and, as noted in our report, most of the admissions counselors we spoke with admitted that they did not know what the term meant. Furthermore, cultural deprivation cannot include disruptive homelife, as Labor says it does, because the law lists these as two separate environmental conditions. Regarding the eligibility requirement that participants have the capability and aspirations to complete and benefit from Job Corps, we agree with Labor that making such a determination is very difficult and challenging and, therefore, we believe that it is important that admissions counselors have guidance adequate to assist them in making these judgments. Furthermore, we agree with one regional official’s portrayal of the current assessment tool as a beginning step in providing guidance on this criterion. Accordingly, we support Labor’s decision to revisit this assessment tool and to obtain regional office and admissions contractors’ suggestions for improving it. With respect to assessment procedures, Labor agreed that Job Corps should not enroll youths who obviously have no desire to be in the program or capability to succeed and that assessing the interest and ability to benefit are important parts of the intake procedure. Labor also noted that participants’ leaving the program within the first 2 months is a cost that Job Corps must do whatever it can to minimize. However, Labor pointed out the need for a balance between this goal and the goal of serving youths who truly need the program, noting that overly strict assessment procedures could be a barrier to many severely disadvantaged youths. Furthermore, Labor stated that the Congress clearly intended that Job Corps serve a severely at-risk population. Labor acknowledged that our report contained a number of positive suggestions (that is, “best practices”) that will be made available to outreach and admissions as well as placement contractors. Labor cautioned that the results of our analysis of characteristics associated with program retention could be misinterpreted because the report lacks the proper context. Labor further suggested that the detailed appendix related to this discussion be removed. Finally, Labor stated that the age data relating to participants who were 15 and 25 years old were inaccurate because Job Corps serves individuals aged 16 to 24. While we do not disagree that the program is to target persons most in need, the law states that the purpose is to assist youths who both need and can benefit from an intensive program. And the law requires that enrollees have the capabilities and aspirations to complete and secure the full benefits of the program.
Several Labor regional directors commented on the importance of identifying applicants who are ready for Job Corps and can benefit from its training. For example, one director stated that with more than 6 million people eligible for Job Corps, admissions counselors have to identify those most likely to benefit from the program and that commitment should be first and foremost when they assess applicants. We also note that, in a previous report, we found that a key element of successful job-training projects was ensuring that participants are committed to training and to getting a job. Accordingly, we endorse Labor’s decision to make available to admissions contractors the procedures noted in our report that help identify the applicants who have the commitment and motivation to remain in and benefit from the program. We modified the report to provide our reasons for performing our analysis of characteristics associated with program retention and to highlight the limitations associated with our approach as well as the results. However, we do not believe the detailed appendix should be eliminated. In addition to describing our analysis and results in detail, it describes the related limitations. Regarding Labor’s statement that our mention of 15- and 25-year-old program participants was inaccurate, we note that we obtained our data from Labor’s national database, which showed that less than 1 percent of program year 1995 enrollees were either 15 or 25 years old. We have added a relevant footnote. With respect to placement performance measures, Labor expressed concern with our recommendation that Job Corps eliminate from the contractors’ placement pool individuals who realistically could not or should not be expected to be placed, such as those expelled from the program for using drugs or engaging in violent behavior. Labor believes that the program has the responsibility to provide placement services to all participants and that it is not asking placement contractors to mislead or lie to employers during placement. Labor further commented that the current placement measure resulted from a recommendation by its Office of Inspector General that all participants who leave the program should be included in the placement pool, thus creating incentives to keep students as long as possible. Labor acknowledged that the points we made in this portion of the report merit serious consideration and, therefore, it will convene a workgroup to discuss our recommendations and examine the incentives and disincentives resulting from any proposed changes to the performance management system. Labor also acknowledged that our report contained “some good points” with respect to training-related placements but expressed concern about our use of hypothetical examples of questionable job-training matches and the lack of data to indicate the degree to which these occur. Labor also commented that the claim by a contractor about obtaining a job-training match for participants trained as bank tellers, secretaries, and welders and placed in fast-food restaurants is inaccurate, noting that the system does not permit such matches. Although Labor may not be asking its placement contractors to lie to or mislead employers when attempting to place individuals who realistically could not be placed, by holding contractors responsible for placing individuals expelled for criminal or violent behavior, the program may be encouraging such practices.
We agree with Labor that determining what happens to every participant is an important indicator of program performance, but we do not believe that it is necessarily an appropriate measure of a contractor’s placement performance. We also acknowledge that establishing an effective performance management system is complex and agree with Labor that, before any changes are made to this system, the incentives and disincentives should be thoroughly examined, and we commend Labor for its proposed action. We used “hypothetical” examples of job-training matches to illustrate the wide latitude Job Corps permits. Labor data were not available to identify the extent of abuse, but as we mentioned in the report, most placement contractors we interviewed indicated that creativity is used when entering codes for placement jobs, and they expressed their concern about the validity of reported job-training match statistics. In response to Labor’s contention that the system does not permit job-training matches for participants trained as bank tellers, secretaries, and welders who obtain jobs in fast-food restaurants, we agree that if such jobs were reported as “fast-food workers,” the system would not permit a job-training match. But, as a contractor we spoke with pointed out, reporting such jobs in fast-food restaurants as “cashier” would be an allowable match for participants trained as bank tellers and secretaries, and reporting such placements as “machine cleaners” would be an allowable match for participants trained as welders. As arranged with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 15 days after its issue date. We will then send copies to the Secretary of Labor, the Director of the Office of Management and Budget, relevant congressional committees, and others who are interested. Copies will be made available to others on request. If you or your staff have any questions concerning this report, please call me at (202) 512-7014 or Sigurd R. Nilsen at (202) 512-7003. GAO contacts and staff acknowledgments are listed in appendix VII. We designed our study to identify whether Labor’s policy guidance on eligibility was consistent with legislation and regulations and to collect information on contractors’ practices in enrolling individuals for the program and in placing them in jobs after they leave Job Corps. We reviewed Job Corps legislation as well as Labor’s program regulations and policy guidance on program eligibility, outreach and assessment of individuals for participation in the program, and placement of participants after termination. We also interviewed national and regional Job Corps officials and conducted site visits to 14 outreach, admissions, and placement contractors. We augmented the information we collected during the site visits with data from Labor’s Student Pay, Allotment, and Management Information System (SPAMIS), a database containing nationwide Job Corps data on all Job Corps participants as well as information on the outreach, admissions, and placement contractors for each participant. We analyzed program year 1995 enrollee data, the most recent full program year for which SPAMIS data were available. While we did not verify the accuracy of Labor’s SPAMIS data, we performed internal validity checks to ensure the consistency of the database. We performed our work between October 1996 and July 1997 in accordance with generally accepted government auditing standards. 
We visited 14 Job Corps outreach and assessment and placement contractors. We selected the sites judgmentally to provide a mixture of contractors that were private contractors, Job Corps centers, and state agencies. We also selected contractors that provided both outreach and assessment services and placement services or that provided only one of these services. In addition, we considered past contractor performance in making our selections. We selected contractors located in 5 of Labor’s 10 regions to provide some regional management diversity and geographic dispersion and to allow us to visit multiple contractors during individual trips. In making our site selections, we identified contractors that had outreach and admissions or placement contracts with Labor during program years 1994 and 1995 and that were still under contract in program year 1996. This provided us with contractors that had multiyear program experience and were currently under contract with Job Corps. In order to select among the larger contractors, we included only contractors that enrolled or were responsible for placing at least 150 participants each year. We then ranked outreach and admissions contractors according to the percentage of program year 1995 enrollees who stayed in the program for more than 30 days and placement contractors according to the percentage of program year 1995 terminees placed in jobs, school, the military, or other training. We selected contractors from among the top, middle, and bottom third of these rankings; a simplified sketch of this ranking-and-selection approach appears below. Table I.1 lists the contractors we visited and their characteristics. Table I.1: Outreach, Admissions, and Placement Contractors We Visited. We visited 11 outreach and admissions contractors whose program year 1995 enrollees left the program within the first 30 days at varying rates. As shown in figure I.1, the percentages ranged from about 1 percent for one contractor’s enrollees to nearly 20 percent for another’s. We also selected 12 placement contractors to visit that had varying success in placing Job Corps participants in program year 1995. As shown in figure I.2, placement rates ranged from about 54 percent to about 85 percent. To obtain information on how contractors enroll individuals in Job Corps and place them after their termination, we interviewed contractor personnel using a semistructured interview protocol. We asked outreach and admissions contractors questions related to their practices and procedures in attracting youths to Job Corps and in screening applicants. We also asked about their understanding and implementation of program eligibility criteria as specified by Labor and about their views on what affects program retention. We questioned placement contractors on their procedures in placing terminees in jobs, the military, or other training; the types of services they provided to terminees; and their practices when deciding whether a placement is a job-training match. We asked both groups of contractors about their views on current Labor performance standards related to recruitment and placements and their opinions on improvements needed in the Job Corps program.
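As a simplified illustration of the ranking-and-selection approach referred to above, the following Python sketch ranks hypothetical contractors by a performance rate, splits the ranking into thirds, and draws one site from each third. Names and rates are invented; our actual selections also weighed contractor type, services provided, and region.

# Hedged sketch of ranking contractors and sampling from each third.
# Contractor names and retention rates are hypothetical.

retention_rates = {
    "Contractor A": 0.99, "Contractor B": 0.95, "Contractor C": 0.92,
    "Contractor D": 0.90, "Contractor E": 0.87, "Contractor F": 0.85,
    "Contractor G": 0.83, "Contractor H": 0.81, "Contractor I": 0.80,
}

ranked = sorted(retention_rates, key=retention_rates.get, reverse=True)
third = len(ranked) // 3
top, middle, bottom = ranked[:third], ranked[third:2 * third], ranked[2 * third:]

# A judgmental pick from each third (here simply the first entry of each).
selected = [top[0], middle[0], bottom[0]]
print(selected)   # ['Contractor A', 'Contractor D', 'Contractor G']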
At three centers (David L. Carrasco, Kittrell, and Sacramento), we also interviewed Job Corps participants (approximately six from each center) to learn about their experiences when they were recruited for Job Corps and to obtain their views about the enrollment process. We interviewed Labor officials at national and selected regional offices to obtain an overview of Job Corps enrollment, placement, and contracting. We also obtained information on Labor’s policy guidance on eligibility and how it relates to the Job Corps legislation; outreach, admissions, and placement contractors’ performance; and the program’s performance management system. In addition, we reviewed Labor’s Policy and Requirements Handbook, which was designed to include all program policy and requirements concerning eligibility criteria and policies and standards related to program enrollment and participant placement. We analyzed Job Corps participant retention data, reasons for termination, and placement information for program year 1995. We used 30-day retention data, part of Labor’s standard for evaluating outreach and admissions contractor performance, as a basis for selecting outreach and admissions contractors to visit. We expanded our analysis of retention beyond the 30-day standard and determined how many terminees left Job Corps within 60 days of enrollment in order to look at retention beyond the realm of outreach and admissions contractor performance. We also used one of Labor’s placement standards—the extent to which terminees are placed in jobs, the military, school, or other training—as a basis for selecting placement contractors to visit. Furthermore, we used the data from our analysis to supplement information obtained in discussions with admissions counselors and placement specialists. In our analysis, we examined the relationship between the characteristics of Job Corps participants and the likelihood of their remaining in the program for at least 60 days. We used the data that were available from Labor’s Student Pay, Allotment, and Management Information System (SPAMIS) on the characteristics of the more than 68,000 participants enrolled in Job Corps during program year 1995. We performed a three-stage analysis resulting in a logistic regression model that used these characteristics to predict the odds of a participant’s remaining in the program for at least 60 days. While the information from our analysis provides some indication of whether participants with specific characteristics will remain in Job Corps for at least 60 days, we do not intend to imply that only individuals with these characteristics should be enrolled in the program or that outreach and assessment efforts should be focused on them. Rather, this information is a source of insight into early program attrition for Labor’s use in Job Corps management. We also recognize that being in the program for at least 60 days indicates only longevity, not necessarily success. For our initial exploration of the data, we selected the participant characteristics from SPAMIS that appeared to be conceptually relevant to the likelihood of remaining in the program for at least 60 days. These included age at enrollment, distance between a participant’s home and the assigned Job Corps center, and educational status. We first used crosstabulations to examine the relationship of these variables to whether the participant remained in the program for 60 days.
The chi-square statistics from these analyses showed the variables that seemed to exhibit no relationship to 60-day retention and helped us eliminate certain characteristics and select a set of variables for further analysis. The set of variables we selected is shown in table IV.1. With these variables, we then performed a bivariate logistic regression to estimate the effects of each individual factor on remaining in Job Corps for at least 60 days. The results from the regression models are stated as odds ratios, which tell us how much more likely participants with certain characteristics are to remain in Job Corps for at least 60 days than participants without those characteristics. We give a chi-square test of significance for each of these odds ratios. To calculate the odds of a specific group remaining in Job Corps for at least 60 days, the numbers remaining and not remaining must be determined. For example, 26,687 participants aged 15-17 enrolled in Job Corps during program year 1995. As shown in table IV.1, 19,148 of these participants remained in the program for at least 60 days, while 7,539 did not. The odds of 15-17-year-old participants remaining in the program for at least 60 days were calculated by dividing the number remaining (19,148) by the number not remaining (7,539). Therefore, the odds for this group’s remaining were 2.54, meaning that 2.54 individuals remained for every 1 who did not. Similar calculations for participants aged 18 to 20 and 21 to 25 yield higher odds of 3.04 and 3.47, respectively. The logistic regression model provides us with odds ratios that tell us how different the odds are for each group and whether the differences are statistically significant. For example, when the 15-17-year-old group is used as a benchmark for comparing the two other groups, the resultant odds ratios are 3.04/2.54 = 1.20 and 3.47/2.54 = 1.37 for participants aged 18 to 20 and 21 to 25, respectively. Thus, the odds of 18-20-year-old participants remaining in Job Corps at least 60 days are 1.20 times the odds of 15-17-year-old participants, and the odds of 21-25-year-old participants remaining are 1.37 times the odds of 15-17-year-old participants. Odds ratios that deviate from 1.0 the most, in either direction, represent the most sizable effects (for example, odds ratios of 0.5 and 2.0 represent effects that are similar in size, since 0.5 indicates that one group’s odds of remaining in the program for at least 60 days are half the other group’s odds, while 2.0 indicates that one group’s odds are twice the other group’s). We performed this type of bivariate analysis for each characteristic we selected. The resulting odds ratios are shown under the “bivariate results” column of table IV.2.
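The odds and odds-ratio arithmetic just described can be reproduced in a few lines of Python. The counts for the 15-17 group are those cited above from table IV.1; the odds for the two older groups (3.04 and 3.47) are taken directly from the text because their underlying counts are not reproduced in this appendix.

# Hedged sketch reproducing the odds and odds-ratio arithmetic above.

def odds(remained: int, left: int) -> float:
    """Odds of remaining at least 60 days: remained divided by not remained."""
    return remained / left

odds_15_17 = odds(19_148, 7_539)   # about 2.54
odds_18_20 = 3.04                  # from the bivariate results in the text
odds_21_25 = 3.47

# Odds ratios benchmark each older group against the 15-17 group.
print(f"odds, 15-17: {odds_15_17:.2f}")
print(f"odds ratio, 18-20 vs. 15-17: {odds_18_20 / odds_15_17:.2f}")   # about 1.20
print(f"odds ratio, 21-25 vs. 15-17: {odds_21_25 / odds_15_17:.2f}")   # about 1.37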
After performing the bivariate analysis, we used the same set of variables in a multivariate logistic regression analysis, which is identical to the bivariate analysis except that it estimates the effects of each characteristic on the likelihood of remaining in the program for at least 60 days while holding constant, or controlling for, the effects of the other characteristics. We included all factors (and levels of factors), even if their effects were not statistically significant in the bivariate analysis, because in some cases effects that are suppressed in bivariate analysis emerge as significant in multivariate analysis. Similarly, effects that were significant in the bivariate analysis may be insignificant in the multivariate analysis. The results of the multivariate logistic regression are shown in column 2 of table IV.2 ("multivariate result"). As this column shows, when we entered all variables into the model, some variables and levels of variables had odds ratios that were not significantly different from the reference category. We dropped these variables or, in cases in which levels of variables were not significantly different from other levels within the same variable, we combined levels. For instance, in the multivariate model, the odds of remaining in Job Corps for at least 60 days for participants having a prior conviction were not significantly different from the odds of remaining for participants not having had a conviction. As shown in table IV.2, the odds ratio of .94 is not statistically significant. Therefore, we dropped this variable from subsequent analysis. Similarly, the odds of remaining for two levels of the variable "distance from home to center" (50-149 miles and 150-299 miles) were not significantly different from the odds of the reference category (300 miles or more). Therefore, we combined these two levels with the reference category to create a two-level variable for subsequent analysis. Thus, we included in the final model only the variables, and levels of variables, that were shown to be significant in the previous multivariate analysis. The results of this final model, as well as statistics related to how well the model performs, are shown in table IV.3. Model performance can be measured by the likelihood ratio method, which evaluates the probability of the observed results, given the parameter estimates. These results are shown under the –2 Log Likelihood (–2LL) entries in the note to table IV.3. As shown, the model containing the predictor variables shows an improved (smaller) –2LL compared with the model containing only the constant (that is, the model that assumes no differential effects resulting from individual variables). The model chi-square, which tests the null hypothesis that the coefficients for all the terms in the model (except the constant) are 0, was significant at the .0000 level. The results of our multivariate analysis revealed that older participants have greater odds of remaining in the program 60 days or more. When compared with 15-17-year-old participants, those aged 18 to 20 and 21 to 25 had odds of remaining that were 15 percent and 27 percent greater, respectively. In addition, we found that participants with 12 or more years of school had about 80 percent greater odds of remaining in Job Corps for at least 60 days than participants with 8 years or less of school. (See table IV.3.) We also found a relationship between the need for bilingual education and the likelihood of remaining in the program for at least 60 days. Of the variables we examined, the need for bilingual education yielded the highest odds ratio: Spanish-speaking participants needing bilingual training had odds of remaining that were almost twice the odds of those not needing bilingual education, and other non-English-speaking participants needing bilingual assistance had odds that were more than 3 times the odds of those not needing bilingual education.
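A minimal sketch of the model fitting and likelihood-ratio comparison described above, using the statsmodels package on synthetic data, follows; the actual analysis used the SPAMIS variables and produced the coefficients reported in tables IV.2 and IV.3.

```python
# Sketch of a multivariate logistic regression and the -2 log likelihood
# comparison described above. Data are synthetic, not SPAMIS records.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5_000
age_18_20 = rng.integers(0, 2, n)        # indicator: participant aged 18-20
needs_bilingual = rng.integers(0, 2, n)  # indicator: needs bilingual education
true_logit = 0.9 + 0.18 * age_18_20 + 0.65 * needs_bilingual
retained_60 = rng.binomial(1, 1 / (1 + np.exp(-true_logit)))

X = sm.add_constant(np.column_stack([age_18_20, needs_bilingual]))
full_model = sm.Logit(retained_60, X).fit(disp=False)
null_model = sm.Logit(retained_60, np.ones((n, 1))).fit(disp=False)

# Exponentiated coefficients are the adjusted odds ratios ...
print(np.exp(full_model.params[1:]))
# ... and the model chi-square is the improvement in -2LL over the
# constant-only model (the model that assumes no effects of the variables).
print(f"model chi-square = {2 * (full_model.llf - null_model.llf):.1f}")
```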
Our attempt to construct a model for predicting the characteristics of participants who are more likely to remain in the program for at least 60 days was limited by the variables available to us in Labor's SPAMIS extracts. Most of these variables were demographic characteristics. We were unable to include in the analysis measures of such things as student ability, attitude, and motivation, as well as other characteristics that could potentially affect the likelihood of participants remaining in the program for at least 60 days. Additionally, the factors that proved to be the most useful predictors of remaining in the program for at least 60 days were characteristics of small subsets of participants. For example, there is evidence that participants in need of bilingual education are more likely to remain, but this group made up less than 3 percent of the Job Corps population. Similarly, participants who had completed 12 years or more of school had odds of remaining that were more than 80 percent greater than those of participants who completed 8 or fewer grades, but almost two-thirds of the participants were in neither of these groups. Consequently, while the model is useful in predicting whether participants with specific characteristics will remain in Job Corps for at least 60 days, its ability to predict 60-day retention for the program's full population is limited because we found no large subgroups with great differences. Finally, in this analysis, we examined only main effects for the variables we investigated. An examination of the interactions among the variables might produce useful information and improve the predictive ability of the model. In addition to the contacts named above, the following persons made important contributions to this report: Thomas N. Medvetz, Wayne Dow, Deborah Edwards, Jeremiah Donoghue, Robert Crystal, and Sylvia Shanks.
Pursuant to a congressional request, GAO provided information on the Job Corps program's recruitment and placement contractors, focusing on: (1) whether Job Corps' policy guidance regarding eligibility criteria is consistent with the legislation and regulations; (2) how the use of recruiting contractors could be improved to increase participant retention in the program; and (3) how the use of placement contractors could be improved to enhance positive outcomes. GAO noted that: (1) Job Corps' policy guidance for 2 of the 11 eligibility criteria was ambiguous and incomplete, which has led to an eligibility determination process that fails to follow the requirements of the law and program regulations; (2) in GAO's visits to several outreach and admissions contractors, GAO found that those with higher retention rates follow procedures aimed at identifying applicants with the commitment and motivation to remain in and benefit from the program; (3) in GAO's analysis of participant characteristics, GAO identified certain characteristics significantly related to the likelihood of remaining in the program for at least 60 days; (4) the Department of Labor (DOL) could use some of these characteristics to design outreach efforts or to establish priorities among eligible applicants; (5) although Job Corps is a performance-driven program and DOL uses performance measures to make decisions on placement contractor renewal, two of the measures DOL used were not meaningful and, thus, DOL did not have the information it needed to accurately assess the performance of placement contractors; (6) placement measures held contractors responsible for placing individuals who may have received little or no benefit from the program or who demonstrated behavior that normally would be unacceptable to most employers; (7) the job-training match measure did not accurately portray the extent to which participants obtained jobs related to their vocational training because of the wide latitude placement contractors have in deciding whether a job is related to the training received and the creativity contractors used in recording the occupational titles of the jobs obtained; (8) one aspect of placement contractors' operations associated with better performance was having staff solely responsible for placing Job Corps participants; (9) seven contractors visited by GAO with high placement rates had staff solely responsible for placing Job Corps participants; (10) in contrast, four of the five contractors having lower placement rates had the same staff responsible for performing outreach and assessment as well as placement; and (11) as a result of its concern about performance, DOL has not renewed 12 of the 18 contracts with state agencies.
DOD’s major weapon systems rely more heavily on software to achieve their performance characteristics than ever before. According to information in a 2000 Defense Science Board Report, in the last 40 years, functionality provided by software for aircraft, for example, has increased from about 10 percent in the early 1960s for the F-4 to 80 percent for the F/A-22, which is currently under development. The reasons for this are simple: performance requirements for weapon systems have become increasingly demanding, and breakthroughs in software capability have led to a greater reliance on software to provide more capability when hardware limitations are reached. Along with this, DOD’s practice of expecting leaps in capability has placed extreme reliance on software development in most acquisitions. As DOD moves to more complex acquisitions—such as the integration of multiple systems in a single “system of systems”—understanding and addressing software development issues have become even more critical for DOD in order to control cost and deliver systems on time. We have issued a series of reports on the knowledge that leading commercial firms gain and use to manage and control the acquisition and development costs of their products. Leading firms attain knowledge early in the development process about the technology they plan to incorporate and ensure that resources match requirements. They make sure the design is mature before approving production and have production processes under control before production begins. Implicit in this approach to product development is the successful development of software. Software is rapidly becoming a significant, if not the most significant, part of DOD’s acquisitions. For example, software enables a missile to recognize a target; on some weapon systems, functionality as basic as flight is no longer possible without sophisticated software. In addition to successful commercial practices and other significant resources that have proven effective for managing software acquisition and development, DOD has at its disposal numerous reports and recommendations by industry experts to transform DOD’s software development process. This community of experts includes independent engineering teams, senior advisors on DOD’s Defense Science Board, and Carnegie Mellon University’s Software Engineering Institute. Although they have offered detailed guidance, DOD’s software-intensive weapon system acquisitions remain plagued by cost overruns, schedule delays, and failure to meet performance goals. DOD is an acquisition organization—that is, it acquires major weapon systems and manages the overall acquisition process as well as the contractors who are tasked with developing the systems and associated software. The more managers know about software development processes and metrics, the better equipped they are to acquire software. On DOD’s weapon system programs, the software development process is a part of the larger weapon system acquisition process. Software development has similar phases and—in the case of new systems—occurs in parallel with hardware development until software and hardware components are integrated. The following describes the four phases common to all software development: Determining requirements: Software development begins with performance requirements for the component or for the fully integrated product. 
Ideally, a team of system and software engineers, users, and acquirers or their representatives analyzes the overall requirements—operational characteristics, user interfaces, speed, maneuverability, survivability, and usability—and translates them into specific requirements, allocating some to software and others to hardware. In more mature organizations, before making a commitment to develop a component or product, the software developer validates that the requirements allocated to software are realistic, valid, testable, and supportable. Management approves the requirements before the design phase begins. Systems engineering, a comprehensive technical management tool, provides the knowledge necessary to translate the acquirer's requirements into specific capabilities. With systems engineering knowledge in hand, the acquirer and the developer can work together to close gaps between expectations and available resources—well before a program is started. Some gaps can be resolved by the developer's investments, while others can be closed by finding technical or design alternatives. Remaining gaps—capabilities the developer does not have or cannot get without increasing the price and timing of the product beyond what the acquirer will accept—must be resolved through trade-offs and negotiation. The basic steps in systems engineering include the following: defining what the acquirer wants, how the final product is to be used, what the operating environment will be, and what the performance characteristics are; turning the requirements into a set of specific functions that the system must perform; and identifying the technical and design solutions needed to meet the required functions. Completion of these steps leads to a product design. Establishing a stable design: The software development team develops a design that meets the software's desired functions. Numerous activities and documents typically are necessary to demonstrate that all of the software requirements are incorporated into a preliminary design and that functionality can be fully tested. The developer may construct a prototype for the acquirer to test the understanding of the requirements during the design phase. If management approves the preliminary design, the developer refines the design, and managers conduct a critical design review before giving approval for the coding phase to begin. Manufacturing code: Software code translates requirements and a detailed design into an executable series of instructions. In more mature software development organizations, developers are required to follow strict coding practices. These include ensuring that the code is reviewed by knowledgeable peers; addresses requirements specified in the final design; follows strict configuration control procedures to ensure that no "secret code" is put in the system; and generally follows coding documentation guidelines that enable software engineers other than the coder to understand and maintain the software. Testing to validate that software meets requirements: To ensure that the design is ready for coding, testing activities start during the design phase and then continue through the coding phase. The testing of code is a critical phase and results in a series of quality-assurance tasks that seek to discover and remove defects that would hinder the software's performance.
Completing these tasks requires the testers to coordinate with various stakeholders, such as the quality assurance group, to define test criteria that sufficiently test the approved software requirements. Significant resources are available to DOD for improving its software acquisition outcomes. Among these is Carnegie Mellon University's Software Engineering Institute, a federally funded research and development center. The Software Engineering Institute has identified specific processes and practices that have proven successful in fostering quality software development. The institute has constructed models for developing and acquiring software, developing and implementing software process improvement programs, and integrating hardware and software into a weapon system. To help organizations meet cost, schedule, and performance goals, the institute has issued guidance for adopting its models. The commercial firms we visited and DOD, both of which use the institute's models, consider them to be an industry standard. The institute created the models to provide general guidance for software development and acquisition activities that programs can tailor to meet their needs. These models can also be used to assess an organization's capability for developing or acquiring software. The Software Capability Maturity Model®, for example, focuses on improving software development processes and rates an organization's software processes at one of five levels of maturity:

Initial: The software process is characterized as ad hoc. Success depends on individual effort.

Repeatable: The basic process is in place to track cost, schedule, and functionality. Some aspects of the process can be applied to projects with similar applications.

Defined: There is a standardized software process for the organization. All projects use some approved version of this process to develop and maintain software.

Managed: The organization uses and collects detailed data to manage and evaluate progress and quality.

Optimizing: Quantitative feedback about performance and innovative ideas and technologies contribute to continuous process improvement.

In addition, the institute has created a model specifically for software acquisition. This model follows the same five maturity levels as the previous model but emphasizes acquisition issues and the needs of individuals and groups who are planning and managing software acquisition activities. A third model focuses on the integration of hardware and software and has a heavier emphasis on systems engineering. (See appendix II for a description of the three models.) Despite acknowledgment of significant problems and access to extensive resources, DOD's problems with software acquisition have continued. In 2000 the Defense Science Board's Task Force on Defense Software reviewed selected DOD software-intensive systems and found that the programs lacked a well-thought-out, disciplined program management plan and software development process. The programs lacked meaningful cost, schedule, and requirements baselines, making it difficult to track progress. These findings are echoed by the work of DOD's Tri-Service Assessment Initiative, an independent group that evaluates Army, Air Force, and Department of the Navy programs' software management processes and offers guidance for developing software in a disciplined manner. The Tri-Service Initiative found that three of the leading causes of problems in software-intensive systems are process capability, requirements management, and organizational management.
A 1999 study performed by the Standish Group, an organization that researches risk, cost, and investment return for information technology investments, found that about one-third of software development programs—commercial or military—resulted in cancellation. Furthermore, in a series of studies completed through the 1990s, the group found that the average cost overrun was 189 percent; the average schedule overrun was 222 percent of the original estimate; and, on average, only 61 percent of the projects were delivered with originally specified features or functions. To address its problems with weapon acquisition, including software-intensive weapon systems, DOD recently revised its requirements generation and acquisition policies to incorporate a more evolutionary framework and improve its ability to deliver more capability to the acquirer faster. Leading software companies we visited have been successful at software development largely because they establish a manageable product development environment, disciplined processes, and strong metrics to manage program outcomes. Key characteristics of a successful environment include evolutionary product development and continuous improvement of development capabilities so outcomes are more predictable. Within this environment, these companies use a structured management review process: at the end of each of four key development phases—requirements, design, coding, and testing—the companies conduct reviews so that the development team does not progress to the next phase unless it attains a certain level of knowledge. A great deal of management attention is placed on the requirements-setting phase because missing, vague, or changing requirements tend to be a major cause of poor software development outcomes. Finally, leading developers we visited track cost and schedule outcomes with the help of a critical management tool called earned value, a key indicator, or metric, for identifying and mitigating risk. In addition to earned value, developers use metrics for the size of a project, requirements, tests, defects, and quality to assess software development progress and to identify potential areas of improvement. Developers share this information with acquirers, who use the data to assess the risk software development poses to overall product development and to make informed decisions about acquisitions. Figure 1 shows that a manageable environment, disciplined processes, and useful metrics are used together to form an effective process for software development. Three leading companies we visited—the General Motors Powertrain unit; the Motorola Global Software Group (GSG); and Teradata, a division of National Cash Register Corporation (NCR)—made a concerted effort to establish an environment that lowers risk and increases the chances of successful software development outcomes. This environment focuses on producing what is possible by establishing evolutionary product development while adhering to well-understood, well-defined, manageable requirements and encouraging continuous improvement of development processes. The environment enables leading companies to effectively compete in markets where delivery times are paramount and the acquirer expects reasonable prices and can go elsewhere with its business if not satisfied. Over time, these leading companies have learned that an evolutionary process emphasizing knowledge and quality enables successful outcomes.
In comparison, an environment that allows too many risks, unknowns, and immature processes into product development can have poor outcomes. In high-risk, low-technology maturity environments, developers find themselves forcing software to meet unrealistic expectations. Officials at each of the companies we visited said that evolutionary product development is one of the fundamental elements of a manageable environment. Evolutionary development reduces risk because it allows software to be developed in small, manageable increments, with the availability of the complete software package coming later in the development life cycle. The General Motors Powertrain unit, which manufactures engines and transmissions, follows an evolutionary approach that calls for four to eight releases of the software product line each year. This approach offers many benefits, including allowing the software teams to restrict the size of projects to make them more manageable and to reduce risk. In addition, only well-defined requirements are included in the scope of the work, allowing the software teams to make improvements to previous releases. These leading companies consider continuous improvement to be an important part of their environment and culture, and most have implemented one of the Software Engineering Institute’s Capability Maturity Models®. They have found that ad-hoc processes make it impossible to gain a clear understanding of when and how defects occur and make it difficult to fix processes so that the same defects can be avoided in the future. Motorola GSG officials told us it is not enough to hire talented software developers to achieve successful outcomes. Rather, companies must establish the right environment and use disciplined processes to help developers work efficiently and then target their recruiting efforts toward staff who can work in a process-oriented environment. This is not an easy task. Companies must be willing to invest time and money to develop new processes, collect meaningful data on a consistent basis, and train employees to follow the processes and interpret the data. In addition, management must display a strong commitment toward implementing the improved processes. Within a low-risk, continuous improvement environment, leading companies we visited use a very structured, gated software development process that requires teams to obtain knowledge about the maturity of their software projects at key points in time. They plan, manage, and track activities for requirements, design, coding, and testing and rely heavily on such activities as configuration management, peer reviews, and quality assurance to help ensure the quality of their software. They also identify areas of risk and take actions to control the risks. Developers pay particular attention to the requirements-setting process because requirements are the foundation of a development effort. If requirements are not well defined or if there are too many changes, the result is additional, sometimes unmanageable risk. Figure 2 is a general depiction of the process used by the companies we visited to manage software development. There are four development phases: determining requirements, establishing a stable design, manufacturing code, and testing to validate that the software meets the requirements and to detect errors. Within each phase are key activities that must take place and knowledge, or information, that must be attained to pass a review and move to the next phase of development. 
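As an illustration only (no company we visited described its gates in these terms), the gated process can be thought of as a set of exit criteria per phase, with advancement blocked until every criterion is backed by evidence:

```python
# Illustrative sketch of a gated review; phases and criteria are hypothetical.
EXIT_CRITERIA = {
    "requirements": {"baseline agreed with acquirer", "quality check passed"},
    "design":       {"preliminary design review passed", "critical design review passed"},
    "coding":       {"peer reviews complete", "unit tests passing"},
    "testing":      {"every requirement verified by at least one test"},
}

def may_advance(phase: str, evidence: set[str]) -> bool:
    # A gate opens only when all exit criteria for the phase are satisfied.
    return EXIT_CRITERIA[phase] <= evidence

# Design work may not begin until the requirements gate is fully passed.
print(may_advance("requirements", {"baseline agreed with acquirer"}))  # False
```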
In addition to the four software development phases, these companies consider quality assurance, configuration management, measurement, and analysis to be integral parts of their software development activities. These activities assist developers in adequately managing software projects and collectively give the developer and the acquirer a level of confidence that the software is being developed within cost, schedule, performance, and quality targets. For example, configuration management allows developers to maintain a historical perspective of each software version change, keep a record of the comments made about the changes, and verify the resolution of defects. Quality assurance activities are typically focused on detecting and resolving defects. However, some companies, like Motorola GSG, may assign responsibility for detecting and resolving defects to the project team and focus their quality assurance activities on evaluating whether project-associated work products adhere to the applicable process standards and procedures. In this case, quality assurance activities would also include ensuring that when the project teams do not comply with processes, these instances are identified, reported, and resolved at the appropriate level. Officials at each company we visited told us that the earlier defects are found and fixed, the less costly it is to the organization. If defects are not found in the phase in which they occur, the cost to correct them grows in subsequent phases, to the point where it could cost the company significantly more to fix a problem once the software is fielded than it would have cost to correct it earlier. Senior managers at software development and acquisition companies we visited expect requirements to be managed and controlled before design work begins and virtually all lower-level design elements to be adequately defined before the start of coding. Without adequate definition and validation of requirements and design, software engineers could be coding to an incorrect design, resulting in missing functionality or errors. Motorola GSG, a communications company, and Teradata, a division of NCR that specializes in database technology, estimate that about 95 percent of their requirements are set by the end of the requirements phase and 98 percent by the end of the design phase. Officials view managing requirements as the most critical development task to ensure successful software outcomes. They said that many software problems, often referred to as defects, could be traced to missing, vague, or changing requirements. Although company officials stated that some requirements-related defects are inevitable, such as those that arise when requirements are not sufficiently detailed, they said significant time and effort are necessary to elicit and document all requirements and determine the appropriate sequence for meeting these requirements. Nevertheless, mature organizations take time to conduct the various activities to sufficiently document and validate requirements before proceeding to preliminary design. Leading software developers told us they typically devote about 20 to 30 percent of their software development time to requirements-setting activities. Doing so ensures that developers will be able to provide managers with key knowledge at the requirements review gate and show that requirements have been properly vetted with the acquirer and that they are achievable and well written. Activities they complete are highlighted below.
Establish integrated project teams: Representatives from all acquirer and developer stakeholder groups use sound systems engineering techniques to establish software requirements.

Categorize requirements: Acquirer and software team develop a comprehensive list of requirements and then categorize them on the basis of how critical they are to the product's performance.

Negotiate requirements: Software team develops resource and schedule estimates on the basis of systems engineering knowledge and past projects of similar size and scope. The software team then advises the acquirer which requirements may have to be delayed or sacrificed on the basis of resource and schedule goals.

Agree to requirements baseline: Software team and acquirer agree to a requirements baseline that details the software requirements, including cost, schedule, performance, and quality goals the software team is expected to achieve.

Develop more detailed software requirements: Using systems engineering, software team breaks the requirements into lower-level requirements, discusses the requirements with the acquirer, and formally documents the more detailed requirements.

Perform quality check: Organization performs quality checks on requirements-related documents, such as the functional requirements document, to ensure that requirements are written clearly and all of the acquirer's requirements have been adequately addressed.

Company officials stress that to develop effective software requirements, the acquirer and developer must work closely together and have open and honest discussions about what can and cannot be done within desired time frames. Motorola GSG officials, for example, emphasize the importance of a written requirements baseline agreement with the acquirer to solidify software requirements and then strict adherence to the agreed-upon requirements to avoid cost and schedule growth. They also perform detailed quality reviews to detect requirements problems early and to avoid costly rework in later stages. Once developers establish requirements, they must also effectively manage the number and timing of requirements changes. Each developer we visited acknowledged that requirements could change at any point. However, officials told us that they aggressively manage requirements changes to make sure that the changes are reasonable and do not have a detrimental impact on project outcomes. For example, before making changes, they analyze the potential impact on cost, schedule, and performance and negotiate with the acquirer about whether the changes should be made within the ongoing project or in a future release. The negotiation usually involves preparing an impact report for review by the acquirer or a governing board. Teradata, a division of NCR, goes further by limiting the number of changes it will make during the development cycle. A stable design ensures that all requirements are addressed and that components and interfaces are defined. A Motorola GSG official stated that at least 90 percent of the company's software designs are stable before coding and suggested that developers that do not effectively manage the design phase could spend as much as 40 percent of a project's resources on rework activities. Leading companies complete a series of activities to stabilize their design and assure management that the software team is ready to advance to the next stage of development.
These activities include, among other things, defining the overall functions and structure of the software on the basis of established requirements; selecting a system design; and developing the detailed system design specifications, which are sometimes referred to as the low-level design. Typically, software teams will have two management reviews during this phase of development. A preliminary design review is used to examine the design rationale and design assumptions to ensure that the resulting software systems will meet the stated requirements. Particular attention is given to high-priority aspects of the system, such as performance, security, maintainability, and system recovery. User manuals and software test plans may also be examined at this time. A critical design review is conducted once the detailed design of the software system has been completed. The purpose of this review is to examine all design features to determine if they meet the acquirer's requirements. Throughout this phase companies typically perform peer reviews of design documents to detect errors and may also construct prototypes for the acquirers to test their understanding of the requirements. During the coding phase, software developers translate the requirements and design into a series of software steps that will control the system. According to company officials, well-written, achievable requirements, as well as very detailed designs, greatly enhance a software developer's ability to create software with relatively few defects. Additional processes that are critical to the success of this phase include peer reviews, coding standards, frequent unit testing, access to a library of pre-coded and tested functionality, and use of programming languages that enable the software engineer to document the code to facilitate understanding at a later time. For example, the leading companies we visited rely heavily on previously developed software to reduce development time, costs, and testing. According to company officials, it is not uncommon for them to reuse 70 percent of previously developed software on a new project. General Motors Powertrain officials emphasized that reuse is a top consideration for their projects, and they have developed a software product line that teams use to complete requirements, design, and coding activities. Over the past few years, they have also re-engineered some of their electronic modules to allow for greater standardization of components within and across their Powertrain portfolio. This has greatly enhanced their ability to reuse software. Testing is then performed to uncover defects or gaps in the code. Leading software companies we visited develop test plans after requirements are stable and take steps to ensure that there are one or more tests for each requirement. Through testing, teams assess the quality of the software to make it as defect-free as possible. For Motorola GSG, the software team is in control of all of the coding, testing, and quality-assurance activities. Officials stated that teams have access to online training and rely on libraries of previously used and tested code. They use peer reviews and inspections extensively for all software documents during the requirements, design, and coding phases, and they test software and hardware components together to identify any integration problems that must be corrected.
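One way to picture the test-planning discipline described above is a traceability check that flags any approved requirement with no planned test. The sketch below is ours, not a depiction of any company's tooling, and the identifiers are hypothetical.

```python
# Sketch of a requirements-to-test traceability check; IDs are hypothetical.
requirements = {"REQ-001", "REQ-002", "REQ-003"}
tests = {
    "TEST-A": {"REQ-001"},                 # each test lists the requirements it covers
    "TEST-B": {"REQ-001", "REQ-003"},
}

covered = set().union(*tests.values())
untested = requirements - covered
if untested:
    print(f"requirements with no test: {sorted(untested)}")  # ['REQ-002']
```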
Leading developers we visited commonly use seven major types of metrics—cost, schedule, size, requirements, tests, defects, and quality—to gauge a project's progress and identify areas for improvement. Acquirers use some of these same metrics to assess whether the developer will be able to deliver the software within cost, schedule, performance, and quality parameters. We found that leading developers are relentless in their efforts to collect metrics to improve project outcomes and processes. The importance of metrics to these companies cannot be overemphasized. Motorola GSG and Teradata, a division of NCR, measure key aspects of software development for individual projects, from the usual cost and schedule goals to process-improvement-type metrics that track the number and type of defects within each software development phase. They also have goals and metrics for companywide initiatives, such as cost-reduction efforts and customer satisfaction. Equally important, they have instilled in their workforce, through training, the critical nature of measuring processes, collecting metrics, and using them to analyze performance. Table 1 provides an overview of the seven categories of metrics used by the leading developers we visited, examples of their specific metrics, and how the companies use the metrics to manage their projects. Company officials cautioned that a variety of metrics could be used to satisfy each category listed in table 1 and that no one set of specific metrics would necessarily apply to all companies. Rather, companies tailor metrics from each category to fit their own needs. Leading developers we visited use metrics from each category to actively oversee their projects and continuously assess their processes and projects to identify opportunities for improvement. Motorola GSG, for example, uses a standard set of metrics to enable project managers, as well as other levels of management, to assess the status of their individual software projects, staff productivity, requirements volatility, cost and schedule estimation accuracy, and the effectiveness of their quality assurance processes. Management also uses the information to compare similar projects within a software center or across the company to identify trends and areas that can be improved. Managers are particularly interested in tracking the number of defects by software development phase, the amount of rework associated with correcting the defects, and the amount of project resources spent to ensure quality. For example, data from one project show that developers were able to find and correct 92 percent of their problems during the phase in which they occurred. The other 8 percent were corrected by the end of the system test phase, resulting in only 1 percent of total project resources being spent to correct defects. Motorola GSG uses an earned value management system to track the actual amount of time and effort it spends on project activities versus what it estimated for the projects. The earned value system, when properly implemented, provides developers and acquirers with early warnings of problems that could significantly affect the software project's cost and schedule. For example, according to private industry research, once a project is over 15 percent complete, developers will be unable to make up any overruns incurred to that point, and the overruns will be even greater once the project is finished.
This is often because project planning typically underestimates the time and effort required to implement planned tasks. Motorola GSG uses a project time-tracking system to record the time spent on project activities that is attributed to the cost of quality and cost of poor quality metrics. The cost of quality metric tracks the amount of time and money spent on such activities as formal quality reviews, testing, defect prevention, and rework to ensure a reliable product. If more resources were expended on these activities than expected, Motorola GSG would identify the reasons for this occurrence and improve its processes to try to prevent overruns from happening again. The cost of poor quality is also a concern to Motorola GSG because it quantifies the amount of rework that was necessary to address any product nonconformance, such as defects before (internal failure) and after (external failure) releasing the software product to the acquirer. According to company officials, the cost of poor quality is a direct reflection of the effectiveness of a company's software development processes. Generally speaking, poor processes lead to greater rework and a higher cost of poor quality, while better processes lead to a small amount of rework and a low cost of poor quality. Motorola GSG officials stated they have been able to hold the cost of poor quality (rework) to less than 5 percent for their projects by identifying when defects occur and then looking for improvements in their processes to try to prevent them from happening again. Acquirers also need the types of metrics presented in table 1 to plan, manage, and track overall product development. These types of metrics allow acquirers to make their own assessments of the status of the software development project, where the software project is headed, the potential risk that software presents to overall product development, and whether the developer's processes are effective in terms of reducing cost and schedule and improving quality. The earned value management system can provide acquirers with key information for calculating cost and schedule variances and for determining how much additional effort will be needed to complete a project on time when it is behind schedule. If acquirers determine that software is likely to be late or over cost at completion, they then have the option to move some of the software requirements to a later development effort or allow the software development team more time to complete the project.
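The basic earned value arithmetic behind such assessments is standard; the figures in this sketch are hypothetical.

```python
# Standard earned value calculations an acquirer might apply; figures are hypothetical.
budget_at_completion = 10_000_000  # planned value of all work (BAC)
planned_value = 2_000_000          # work scheduled to date (PV)
earned_value = 1_500_000           # value of work actually completed (EV)
actual_cost = 1_800_000            # cost of the completed work (AC)

cost_variance = earned_value - actual_cost        # negative means over cost
schedule_variance = earned_value - planned_value  # negative means behind schedule
cpi = earned_value / actual_cost                  # cost efficiency to date

# If the cost efficiency to date continues, the estimate at completion
# grows from the 10.0 million budgeted to BAC / CPI, about 12.0 million.
estimate_at_completion = budget_at_completion / cpi
print(f"CV = {cost_variance:,}  SV = {schedule_variance:,}  EAC = {estimate_at_completion:,.0f}")
```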
In our reviews of five major DOD software-intensive weapon system acquisitions, we found mixed results. When DOD managers had a smaller, more evolutionary product with manageable requirements, used a disciplined development process with gated reviews, and collected and used metrics to manage software development progress—as in the Tactical Tomahawk and F/A-18 C/D programs—they delivered their product with less cost increase and less schedule delay. When DOD managers had expectations of developing revolutionary capabilities and did not use structured management reviews or collect and use metrics for software development—as in the F/A-22, SBIRS, and Comanche programs—they experienced significant cost growth and schedule delays. Table 2 illustrates how an evolutionary environment, effective process management, and use of meaningful metrics correlate with the cost and schedule outcomes experienced by each program. The Tactical Tomahawk and F/A-18 C/D programs were developed in an evolutionary environment, engaged in extensive work on requirements, controlled requirements changes, collected and used detailed metrics to track development progress, and had less cost and schedule increase than the other programs we reviewed. The Navy's Tactical Tomahawk missile will provide ships and submarines with enhanced capability to attack targets on land. New features include an improved anti-jamming global positioning system, in-flight retargeting, and the ability to transmit battle damage imagery. Tomahawk program developers had disciplined development processes, used extensive peer reviews to discover defects, and provided the acquirer with insight at each stage in development: requirements, design, code, and test. They were responsible for collecting and reporting data on a monthly basis, relying on metrics—cost, schedule, effort, size, requirements, testing, and defects—that are similar to those used by leading commercial firms. The program office managed the acquisition based on the trends found in these metrics. The F/A-18 C/D is a Navy attack fighter aircraft that has been deployed for a number of years. Periodically, the Navy upgrades the flight software to incorporate new features, add the capability to fire new munitions, and correct deficiencies discovered since the last upgrade. Working in an evolutionary environment, F/A-18 C/D program officials recognized that the success of the software upgrade to incorporate additional performance into the flight operations software depended on extensive requirements analysis before program start and firm control as requirements changed throughout development. This analysis ensured that the effort needed to meet requirements was well understood at the beginning of development, thus limiting the amount of redesign. Proposals for new requirements or changes to requirements after the program began were analyzed for cost, schedule, and performance impact. As with the Tomahawk program, F/A-18 developers adhered to disciplined development processes, used extensive peer reviews to discover defects, and collected meaningful metrics to track progress. The F/A-22, SBIRS, and Comanche are complex programs that attempted to achieve quantum leaps in performance requiring extensive use of software rather than follow an evolutionary approach to software development. They all initially lacked controls over requirements, software processes, and metrics, causing major program upheavals. They encountered significant requirements changes, schedule slips, and cost increases because software defects were not discovered until later stages of the programs. Each of these programs has been restructured to incorporate requirements management controls, more-defined software development processes, and additional metrics. The Air Force's F/A-22, originally planned to be an air dominance aircraft, will also have air-to-ground attack capability. It is expected to have advanced features, such as stealth characteristics, to make it less detectable to adversaries and capable of high speeds for long ranges. The F/A-22's avionics are designed to greatly improve pilots' awareness of the situation surrounding them. Early in the development process for the F/A-22, we reported that the program's planned strategy for software development and acquisition was generally sound.
We cited the Air Force’s plans to collect software costs and other software metrics to measure progress as examples of this sound strategy. At that time, we endorsed the program’s plans to be event- rather than schedule-driven. However, as early as 1994, many features of this sound strategy were not being followed. Delayed software deliveries contributed to cost increases and schedule delays. Requirements and design changes accounted for 37 percent of the critical problem reports leading to avionics shutdowns in the F/A-22, according to program office reports. Program officials and contractor personnel agreed that requirements volatility had been a problem; however, they were unable to provide any specific measure of requirements changes because they had not tracked the overall growth in software requirements since the first 3 years of the program. According to Lockheed Martin officials, the avionics system software is made up of 84 computer software configuration items, each of which accounts for a specific avionics function, such as the interaction between the pilot and the aircraft. In our discussion with contractor and program personnel, they stated that disciplined processes in requirements control, design, testing, and configuration management were not uniformly followed because of cost and schedule pressures. The F/A-22 software strategy also called for the collection of software metrics to measure costs. Program and contractor officials were unable to provide metrics for sufficient management visibility over the overall progress of the software. The contractor stated that the Air Force did not compile metrics from lower levels into major segments such as avionics. The Air Force’s SBIRS satellites are being developed to replace DOD’s older missile-warning satellites. In addition to missile warning and missile defense missions, the satellites will perform technical intelligence and battlespace characterization missions. Since the program was initiated in 1996, SBIRS has faced cost, scheduling, and technology problems. We have reported that SBIRS has experienced serious software design problems. Officials from Lockheed Martin, the prime contractor, stated that the program had uncontrolled requirements growth as well as overly optimistic expectations about reusing software from a previous program. Program and contractor officials agreed that deficient systems engineering and the scarcity of personnel in software engineering disciplines contributed to ineffective control and to not understanding how much of the previous software could be reused. These officials also stated that neither the program office nor the contractor had a change management control process in place to analyze change requests. A thorough analysis late in the program revealed that very little of the software could be reused. Furthermore, because of a deficiency in resources devoted to systems engineering, the total requirements for the system were not adequately defined. A report from an independent review team stated that more robust systems engineering could have precluded some of the problems. The report concluded that problems with the first SBIRS increment were primarily due to problems with software development and poor program execution. Peer reviews and engineering review boards were in place to monitor development, but, for reasons ranging from schedule pressures to reduced staffing, these decision bodies were ineffective. 
SBIRS contractor officials stated that they collected data on additions to requirements and on the number of lines of code, but because there were no restrictions on accepting new requirements and no control limits on the size of the code, the metrics were not used to manage the project on a daily basis. The Army's Comanche is a multi-mission helicopter intended to perform tactical armed reconnaissance. It is designed to operate in adverse weather across a wide spectrum of threat environments and to provide improved speed, agility, reliability, maintainability, and low observability over existing helicopters. Since the program's first cost estimate, originally approved in 1985, the research and development cost for Comanche has almost quadrupled, and the time to obtain an initial capability has increased from 9 to over 21 years. Several studies have identified software development as a problem area and highlighted requirements volatility and inadequate requirements analysis as having a large impact on the program. The lack of a disciplined process for Comanche's software acquisition was also cited as a reason for program shortfalls; however, the exact percentage of cost growth attributable to software is not known because the program office lacked adequate visibility into the software development process and, therefore, has little historical data on software. Comanche officials stated that initially they did not require a uniform set of metrics from the contractor. They said they received earned value information from the contractor, but it combined software and hardware development data. All three programs have been restructured and have instituted changes to bring more knowledge into the programs. For example, F/A-22 program officials report that their contractors have teamed with divisions within their companies that have more disciplined processes, and they are reporting fewer problems with the avionics software. SBIRS program officials stated that they have instituted more controls over requirements changes, requiring analysis and approval at higher levels. Comanche officials reported that the program office has quarterly software reviews with the contractor to focus attention on software development progress and has adopted an incremental, block development strategy for software development. Program officials stated that they have asked for more-detailed metrics by which to manage the programs. As a result of congressional requirements to initiate improvement plans and revisions to requirements and acquisition policies, DOD, the military services, and MDA have created a more conducive environment for software acquisition and development. However, additional steps must be taken. The leading software acquirers and developers we visited create disciplined software development processes and collect useful metrics for management oversight. These practices have proven to be a significant factor in their ability to achieve successful outcomes. DOD, the services, and MDA still lack controls in these areas that would put acquisition program managers in a better position to achieve successful program outcomes. The plans that the services and MDA have begun in response to congressional direction have varying levels of detail and are at various stages of approval within the organizations. The Army, for example, has completed and has begun to implement its plan.
The plan includes using pilot programs to provide information on metrics, and the Army expects to team with the Software Engineering Institute to identify training needs and opportunities for continuous improvement. MDA has prepared a detailed draft that includes forming a baseline assessment of each missile defense element and making recommendations to the program office for each element to adopt improvement processes. MDA expects the elements to begin work once the baseline assessment is complete. The Navy's response includes teaming with the Software Engineering Institute to identify a course of action, including a training program for acquisition professionals, and identifying software acquisition requirements and management initiatives. The Air Force has called for a working group to begin in March 2004 to baseline Air Force practices and to suggest a course of action. These efforts establish an environment of change for the services and provide a platform upon which to make additional improvements. Furthermore, they make explicit for software the evolutionary approach to systems development and acquisition that DOD included in the recently revised requirements generation and acquisition policies. However, the services' and MDA's planning does not include practices we found at leading commercial firms that enable those firms to have successful outcomes. Furthermore, the plans do not incorporate controls that would ensure that the plans now being formulated are incorporated into acquisition practice. The plans could be strengthened by adding specific criteria to ensure that

requirements baselines based on systems engineering are documented and agreed to by both the acquirer and the developer before a program's initiation, and that cost/benefit analyses are required when new requirements are proposed;

software developers and acquirers make efforts to continually improve their processes;

gated reviews and deliverables are integrated into the development process; and

developers collect and analyze metrics, including earned value, to obtain knowledge about development progress and to manage risk.

Army, Navy, Air Force, and MDA officials said they have high-level support for improving software acquisition and for the plans they are developing, and the Army and MDA stated that they had included funding for software improvements in their budgets. Officials at the leading companies we visited emphasized that strong management support is needed to ensure success with process improvements. Although DOD has embraced an evolutionary approach in its acquisition policy, DOD has not yet incorporated a requirement specific to software process improvement into the policy. Furthermore, DOD has not said how it will require individual program offices to follow the guidance once the services and MDA establish full-fledged programs to improve software development processes. Apart from the software acquisition improvement plans, DOD has taken some initiatives to strengthen software acquisition and development as well as address repeated performance shortfalls attributed to software. Since 1999 the Tri-Service Initiative has conducted detailed assessments of software-intensive programs to identify and mitigate software risks. The initiative has assessed about 50 programs spanning all military branches.
While the results of individual assessments are confidential to their programs, an overview shows three main causes of critical program performance problems: (1) the programs' ability to establish and adhere to processes that meet program needs, (2) requirements management, and (3) organizational management. Process capability was identified as a problem in 91 percent of the case studies, while requirements management and organizational management were each identified as problems 87 percent of the time. These findings are consistent with our discussions with leading companies about significant problem areas for software development management. This kind of information could prove useful to the military services and agencies as they plan for improving software acquisition. DOD has begun another initiative to strengthen the role that systems engineering plays in weapon system development as well as in software development. According to DOD officials, this initiative will include provisions for gated reviews of systems engineering baselines on an event-driven basis. Furthermore, the officials stated that they were working to incorporate the new systems engineering directives into acquisition policy. DOD has tasked a source selection criteria working group with clarifying policy regarding source selection criteria for software-intensive systems, and another working group is creating a clearinghouse for best practices. The source selection criteria working group is discussing the application of software product maturity measures, and the Software Intensive Systems office is developing a proposal for a centralized clearinghouse of software best practices, but these initiatives are not complete. To provide a better method of estimating the cost of software, DOD added a requirement to its acquisition policy to report such information as project type, size, effort, schedule, and quality data to the Cost Analysis Improvement Group. DOD policy requires the Software Resource Data Report for major defense programs for any software development element with a projected software effort greater than $25 million. Organizations we visited that have established a strong, consistent, evolutionary environment and practices for setting product requirements, maintaining a disciplined development process, and using metrics to oversee development progress achieve favorable cost, schedule, and quality outcomes for software projects. These practices limit development efforts to what can be managed and result in decisions throughout the development process that are based on knowledge obtained through systems engineering that is sufficient to adequately gauge risks. The organizations we visited made business decisions to invest time and resources in achieving high process maturity levels to improve these practices. For the most part, in the programs we reviewed, DOD garnered poor results from its software acquisition process because it did not employ consistent practices in these areas. Much as we have found in DOD's overall acquisition management process, decisions to begin programs and to make significant investments throughout development are made without matching requirements to available resources and without demanding sufficient knowledge at key points. The acquisition programs we reviewed that used evolutionary environments, disciplined processes, and metrics-based management were more successful, and the programs that did not use these practices were less successful.
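To make the metrics-based management described above concrete, the following minimal sketch (written in Python, with purely illustrative work packages, dollar figures, and thresholds that are our own assumptions, not data from any program discussed in this report) computes the standard earned value indices at the level of individual software work packages, the level of visibility that was missing when, as on Comanche, earned value data combined software and hardware development.

    # Minimal sketch of software-level earned value metrics (illustrative only).
    # Standard formulas: cost performance index CPI = EV / AC and schedule
    # performance index SPI = EV / PV, where PV is the budgeted cost of work
    # scheduled, EV the budgeted cost of work performed, and AC the actual cost.
    from dataclasses import dataclass

    @dataclass
    class WorkPackage:
        name: str
        planned_value: float  # budgeted cost of work scheduled to date ($K)
        earned_value: float   # budgeted cost of work performed to date ($K)
        actual_cost: float    # actual cost of work performed to date ($K)

    def report(pkg: WorkPackage) -> str:
        cpi = pkg.earned_value / pkg.actual_cost
        spi = pkg.earned_value / pkg.planned_value
        flags = []
        if cpi < 0.95:
            flags.append("cost overrun risk")
        if spi < 0.95:
            flags.append("schedule slip risk")
        status = ", ".join(flags) if flags else "on track"
        return f"{pkg.name}: CPI={cpi:.2f} SPI={spi:.2f} ({status})"

    # Hypothetical software-only work packages; a real program office would pull
    # these each month from the contractor's earned value management system.
    for pkg in [
        WorkPackage("avionics build 1 code", 400, 340, 425),
        WorkPackage("mission software test", 250, 245, 240),
    ]:
        print(report(pkg))

An index below 1 signals trouble: a CPI below 1 means the work performed cost more than budgeted, and an SPI below 1 means less work was completed than planned. The 0.95 alert threshold above is likewise illustrative.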
DOD has attempted to improve acquisition outcomes by establishing a framework for an evolutionary environment in its requirements generation and acquisition policies that develops manageable increments of capability. This is a positive step. However, DOD's policies do not contain the controls needed to ensure that individual programs will adhere to disciplined requirements and development processes, nor do they include the metrics needed to do so. As DOD works to finalize its software process improvement plans, it has the opportunity to put in place those practices that have proven successful in achieving improved outcomes for software-intensive systems. In moving into a more complex, "system of systems" acquisition environment, much more will be demanded from software. The need for consistent practices and processes for managing software development and acquisition will become paramount if DOD is to deliver capabilities as promised. We have previously made recommendations to DOD to adopt certain specific practices developed by the Software Engineering Institute. As DOD changes the way it manages software-intensive systems, it must take steps to ensure better acquisition outcomes. We recommend that the Secretary of Defense take the following four actions. To assure that DOD appropriately sets and manages requirements: (1) document that software requirements are achievable, based on knowledge obtained from systems engineering, before development begins and that DOD and the contractor have a mutual understanding of the software requirements; and (2) perform trade-off analyses, supported by systems engineering analysis, that consider the performance, cost, and schedule impacts of major changes to software requirements. To ensure that DOD acquisitions are managed according to a disciplined process: (3) have acquirers develop a list of systems engineering deliverables (including software), tailored to the program's characteristics and based on the results of systems engineering activities, that software developers are required to provide at the appropriate stages of the requirements, design, fabrication/coding, integration, and testing phases of system development. To ensure that DOD has the knowledge it needs to oversee software-intensive acquisitions: (4) require software contractors to collect and report metrics related to cost, schedule, size, requirements, tests, defects, and quality to program offices on a monthly basis and before program milestones, and ensure that contractors have an earned value management system that reports cost and schedule information at a level of work that provides information specific to software development. These practices should be included and enforced with controls and incentives in DOD's acquisition policy, software acquisition improvement plans, and development contracts. DOD provided us with written comments on a draft of this report. The department concurred with two of the recommendations, subject to our incorporating some minor revisions. Since the suggested revisions did not materially change the intent of the recommendations, we revised them. For the two other recommendations, the department partially concurred. The department agreed that the report provides useful insight for improving the software acquisition process and is consistent with its efforts to improve the process as it continues to implement section 804 of the Fiscal Year 2003 National Defense Authorization Act.
It also agreed to take the report's findings into account as it monitors the process for continuous improvement and to apply our recommendations as further guidance to its component services and agencies. The department further noted that the techniques highlighted in the report should not be seen as a panacea. We agree. Our report provides evidence that acquisitions can succeed if they take place in an evolutionary environment rather than in an environment that requires complex solutions for a single quantum leap in software capabilities. To augment an evolutionary environment, requirements must be carefully managed and existing systems and software engineering knowledge must be taken into account, the development processes must be disciplined and transparent to decision makers, and key metrics must be gathered and used to support decisions. We disagree with the department's observation that the report "plays down significant challenges associated with acquisition of complex defense systems…." To the contrary, our report highlights those challenges as inherent to acquisitions that proceed with limited knowledge about how to achieve quantum leaps in capability in a single acquisition. Our comparison of two successful evolutionary programs (Tactical Tomahawk and F/A-18 C/D, both categorized as major defense acquisition programs) with three revolutionary programs (F/A-22, SBIRS, and Comanche) shows different outcomes in terms of cost, schedule, and delivery of equipment to the warfighter. In its rationale for providing programs with data less frequently than our third recommendation calls for, DOD suggested that data alone do not create knowledge and that knowledgeable software professionals are needed to interpret the data. We agree that both knowledgeable people and data are needed, but those professionals must have data to interpret. We found that initially the F/A-22, SBIRS, and Comanche programs had knowledgeable staff but little data to analyze. In response to the fourth recommendation, DOD indicated that it was already addressing software acquisition in policy and cited multiple sections of DOD Directive 5000.1 as evidence. We do not agree that the current policy puts adequate controls in place to improve software practices to the level achieved by leading commercial companies, and the policy is silent about including incentives in contracts for improving software processes. The department's comments are printed in appendix I. To determine the best practices commercial companies use to manage software development and acquisition, we first conducted general literature searches. From these searches and discussions with experts, we identified numerous companies that follow structured and mature processes for software development and acquisition. We visited the following commercial companies: Computer Sciences Corporation (CSC) develops individual business solutions for commercial and government markets worldwide. The company specializes in management and information technology consulting, systems consulting and integration, operations support, and information services outsourcing. In 2003, the company generated revenues of $11.3 billion. We visited CSC's Federal Sector office in Moorestown, New Jersey, and discussed its practices for developing and acquiring commercial and federal software. The Federal Sector unit has achieved a Level 5 Capability Maturity Model rating.
Diebold, Incorporated manufactures self-service products, such as automated teller machines, electronic and physical security products, and software and integrated systems. In 2002 the company reported revenues of $1.9 billion. We visited the company's headquarters in North Canton, Ohio, and discussed the process it uses to develop software for automated teller systems. General Motors, the world's largest vehicle manufacturer, designs, builds, and markets cars and trucks worldwide. In 2002 the company reported total net sales of $186.7 billion. We spoke with representatives from the Powertrain Group to discuss the processes used to develop and acquire electronic controls. Motorola provides integrated communications and embedded electronic solutions, such as wireless phones, two-way radio products, and Internet-access products, to consumers, network operators, and commercial, government, and industrial customers. In 2002 the company reported net sales of $26.7 billion. We visited its Global Software Group offices in Montreal, Canada, and discussed the company's software and product development processes. The Global Software Group has achieved a Level 5 Capability Maturity Model rating. NCR offers solutions for data warehousing, retail store automation, and financial self-service. In 2002 the company reported sales totaling approximately $5.6 billion. We visited the Teradata Data Warehousing group office in San Diego, California, and discussed the software development process for the company's Teradata database software. The Teradata unit has achieved a Level 4 Capability Maturity Model rating. Software acquisition covers myriad activities and processes, from planning and solicitation, to transition, to the support of a developed product. In fact, the Software Engineering Institute's Capability Maturity Models (CMM)® for software acquisition and development delineate more than a dozen different processes of this nature and offer principles governing the goals, activities, necessary resources and organizations, measurements, and validation of each process. This report does not attempt to judge software acquisitions against all of those processes. Instead, our scope targets practices in three critical management areas that we identified as problem areas from our previous work on weapon systems acquisitions and through discussions with leading companies. We limited our focus to ways to develop an environment that encourages continual improvement, to improve the management of software development processes (including software requirements), and to use metrics to improve overall weapon system acquisition outcomes. In doing so, we borrowed criteria from each CMM® that offered a road map for continuous improvement in each of those specific areas. At each of the five companies, we conducted structured interviews with representatives to gather uniform and consistent information about the practices, processes, and metrics each company uses to manage software development and software acquisition. During meetings with representatives, we obtained a detailed description of the practices and processes they use to develop software within cost and schedule targets and to ensure quality. We also consistently used a structured data collection instrument to collect metrics from the companies on their software projects. We met with company directors, software engineers, project managers, configuration managers, and quality assurance personnel.
Our report highlights several best practices in software development and acquisition on the basis of our fieldwork. As such, they are not intended to describe all practices or to suggest that commercial companies are without flaws. Representatives from the commercial companies we visited told us that their practices have evolved over many years and that they continue to be improved on the basis of lessons learned and new ideas and information. This is not to say that the application and use of these practices have always been consistent or without error or that the companies subscribe to a single model for their practices and processes. However, they strongly suggested that the probability of success in developing and acquiring software is greatly enhanced by the use of these practices and processes. We also selected five DOD weapon systems: RAH-66 Comanche, F/A-22, F/A-18 C/D, SBIRS, and Tactical Tomahawk. These systems are at various stages of development. We compared the practices, processes, and metrics the programs were using to manage software development and acquisition with the best practices commercial companies use. To identify the current policy, processes, and acquisition practices used in software development, for each program we conducted structured interviews with representatives from the program office and the prime contractors: Boeing Sikorsky for Comanche; Lockheed Martin, Marietta, Georgia, for F/A-22; and Lockheed Martin, Boulder, Colorado, for SBIRS. We also used a data collection instrument to determine which metrics the program offices were collecting. We selected Air Force, Army, and Navy programs because all three services manage major defense acquisition programs. We also obtained the plans that the services and MDA have prepared to date in response to section 804 of the Bob Stump National Defense Authorization Act for Fiscal Year 2003. The legislation states that the Secretary of each military department and the head of each defense agency that manages a major defense acquisition program with a substantial software component shall establish a program to improve the software acquisition processes of that military department or defense agency. To determine how DOD responded to Congress's requirement, we met with DOD officials from the Tri-Service Assessment Initiative and the Software Intensive Systems Office and with the staff responsible for developing the process improvement plans for the Air Force, Army, Department of the Navy, and MDA. We also met with officials from the Office of the Under Secretary of Defense (Acquisition, Technology and Logistics) concerning systems engineering initiatives and officials from the Office of the Assistant Secretary of Defense (Networks and Information Integration) concerning the software improvement plans. Because the plans are in varying stages of completeness, we did not evaluate the degree to which the military services and MDA have complied with section 804. To determine whether the responses so far would help improve DOD's software acquisition, we evaluated them on the basis of the information we obtained from leading organizations concerning environment, disciplined processes, and collection of meaningful metrics. We conducted our review between March 2003 and February 2004 in accordance with generally accepted government auditing standards.
We are sending copies of this report to the Secretary of Defense; the Secretaries of the Air Force, Army, and Navy; the Director of the Missile Defense Agency; and the Director of the Office of Management and Budget. We will also provide copies to others on request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. Please contact me at (202) 512-4841 if you have any questions concerning this report. Other key contributors to this report were Cheryl Andrew, Beverly Breen, Lily Chin, Ivy Hubler, Carol Mebane, Mike Sullivan, Sameena Nooruddin, Marie Penny Ahearn, Madhav Panwar, and Randy Zounes. The Capability Maturity Model for Software (SW-CMM)® describes the principles and practices underlying software process maturity and is intended to help software organizations improve the maturity of their software processes along an evolutionary path organized into five maturity levels. Except for Level 1, each maturity level is decomposed into several key process areas that indicate where an organization should focus to improve its software process. Table 3 describes the characteristics of each level of process maturity and the applicable key process areas. The Software Acquisition Capability Maturity Model (SA-CMM)® is a model for benchmarking and improving the software acquisition process. The model follows the same architecture as the SW-CMM® but with a unique emphasis on acquisition issues and the needs of individuals and groups who are planning and managing software acquisition efforts. Each maturity level indicates an acquisition process capability and contains several key process areas. Each key process area has goals, common features, and organizational practices intended to institutionalize the practices. In 1997 a team led by DOD, in conjunction with the Software Engineering Institute, government, and industry, concentrated on developing an integrated framework for maturity models and associated products. The result was the Capability Maturity Model Integration (CMMI)®, which is intended to provide guidance for improving an organization's processes and its ability to manage the development, acquisition, and maintenance of products and services while reducing the redundancy and inconsistency caused by using stand-alone models. CMMI® combines earlier models from the Software Engineering Institute and the Electronic Industries Alliance into a single model for use by organizations pursuing enterprise-wide process improvement, and it is ultimately intended to replace the models that were its starting point. An integrated model consists of disciplines selected according to individual business needs; these disciplines can include systems engineering, software engineering, integrated product and process development, and supplier sourcing. There are also two representations of each CMMI® model, staged and continuous; a representation reflects the organization, use, and presentation of model elements. Table 5 shows the staged groupings of the CMMI® model. (a) Establishment of Programs.—(1) The Secretary of each military department shall establish a program to improve the software acquisition processes of that military department. (2) The head of each Defense Agency that manages a major defense acquisition program with a substantial software component shall establish a program to improve the software acquisition processes of that Defense Agency. (3) The programs required by this subsection shall be established not later than 120 days after the date of the enactment of this Act.
(b) Program Requirements.—A program to improve software acquisition processes under this section shall, at a minimum, include the following: (1) A documented process for software acquisition planning, requirements development and management, project management and oversight, and risk management. (2) Efforts to develop appropriate metrics for performance measurement and continual process improvement. (3) A process to ensure that key program personnel have an appropriate level of experience or training in software acquisition. (4) A process to ensure that each military department and Defense Agency implements and adheres to established processes and requirements relating to the acquisition of software. (c) Department of Defense Guidance.—The Assistant Secretary of Defense for Command, Control, Communications, and Intelligence, in consultation with the Under Secretary of Defense for Acquisition, Technology, and Logistics, shall—(1) prescribe uniformly applicable guidance for the administration of all of the programs established under subsection (a) and take such actions as are necessary to ensure that the military departments and Defense Agencies comply with the guidance; and (2) assist the Secretaries of the military departments and the heads of the Defense Agencies to carry out such programs effectively by—(A) ensuring that the criteria applicable to the selection of sources provides added emphasis on past performance of potential sources, as well as on the maturity of the software products offered by the potential sources; and (B) identifying, and serving as a clearinghouse for information regarding, best practices in software development and acquisition in both the public and private sectors. (d) Definitions.—In this section: (1) The term "Defense Agency" has the meaning given the term in section 101(a)(11) of title 10, United States Code. (2) The term "major defense acquisition program" has the meaning given such term in section 139(a)(2)(B) of title 10, United States Code.

Defense Acquisitions: DOD's Revised Policy Emphasizes Best Practices, but More Controls Are Needed. GAO-04-53. Washington, D.C.: November 10, 2003.
Best Practices: Setting Requirements Differently Could Reduce Weapon Systems' Total Ownership Costs. GAO-03-57. Washington, D.C.: February 11, 2003.
Best Practices: Capturing Design and Manufacturing Knowledge Early Improves Acquisition Outcomes. GAO-02-701. Washington, D.C.: July 15, 2002.
Defense Acquisitions: DOD Faces Challenges in Implementing Best Practices. GAO-02-469T. Washington, D.C.: February 27, 2002.
DOD Information Technology: Software and Systems Process Improvement Programs Vary in Use of Best Practices. GAO-01-116. Washington, D.C.: March 30, 2001.
Best Practices: Better Matching of Needs and Resources Will Lead to Better Weapon System Outcomes. GAO-01-288. Washington, D.C.: March 8, 2001.
Best Practices: A More Constructive Test Approach Is Key to Better Weapon System Outcomes. GAO/NSIAD-00-199. Washington, D.C.: July 31, 2000.
Defense Acquisition: Employing Best Practices Can Shape Better Weapon System Decisions. GAO/T-NSIAD-00-137. Washington, D.C.: April 26, 2000.
Best Practices: DOD Training Can Do More to Help Weapon System Programs Implement Best Practices. GAO/NSIAD-99-206. Washington, D.C.: August 16, 1999.
Best Practices: Better Management of Technology Development Can Improve Weapon System Outcomes. GAO/NSIAD-99-162. Washington, D.C.: July 30, 1999.
Defense Acquisitions: Best Commercial Practices Can Improve Program Outcomes. GAO/T-NSIAD-99-116. Washington, D.C.: March 17, 1999.
Defense Acquisition: Improved Program Outcomes Are Possible. GAO/T-NSIAD-98-123. Washington, D.C.: March 17, 1998.
Best Practices: DOD Can Help Suppliers Contribute More to Weapon System Programs. GAO/NSIAD-98-87. Washington, D.C.: March 17, 1998.
Best Practices: Successful Application to Weapon Acquisition Requires Changes in DOD's Environment. GAO/NSIAD-98-56. Washington, D.C.: February 24, 1998.
Best Practices: Commercial Quality Assurance Practices Offer Improvements for DOD. GAO/NSIAD-96-162. Washington, D.C.: August 26, 1996.
The Department of Defense (DOD) has been relying increasingly on computer software to introduce or enhance the performance capabilities of major weapon systems. To ensure successful outcomes, software acquisition requires disciplined processes and practices; without such discipline, weapon programs encounter difficulty in meeting cost and schedule targets. For example, in fiscal year 2003, DOD might have spent as much as $8 billion to rework software because of quality-related issues. GAO was asked to identify the practices used by leading companies to acquire software and to analyze the causes of poor outcomes in selected DOD programs. GAO also was asked to evaluate DOD's efforts to develop programs for improving software acquisition processes and to assess how those efforts compare with leading companies' practices. Software developers and acquirers at the firms GAO visited use three fundamental management strategies to ensure the delivery of high-quality products on time and within budget: working in an evolutionary environment, following disciplined development processes, and collecting and analyzing meaningful metrics to measure progress. When these strategies are used together, leading firms are better equipped to improve their software development processes on a continuous basis. An evolutionary approach sets up a more manageable environment, one in which expectations are realistic and developers are permitted to make incremental improvements; it avoids the pressure to incorporate all the desired capabilities into a single product right away. The customer benefits because the initial product is available sooner and at a lower, more predictable cost. Within an evolutionary environment, four phases are common to software development: setting requirements, establishing a stable design, writing code, and testing. At the end of each of these phases, developers must demonstrate that they have acquired the right knowledge before proceeding to the next development phase. To provide evidence that the right knowledge was captured, leading developers emphasize the use of meaningful metrics, which help developers, managers, and acquirers measure progress. These metrics focus on cost, schedule, the size of a project, performance requirements, testing, defects, and quality. In a review of five DOD programs, GAO found that outcomes were mixed for software-intensive acquisitions. The F/A-18 C/D, a fighter and attack aircraft, and the Tactical Tomahawk missile experienced less additional cost growth and fewer schedule delays; for these programs, developers used an evolutionary approach, disciplined processes, and meaningful metrics. In contrast, the following programs, which did not follow these management strategies, experienced schedule delays and cost growth: the F/A-22, an air dominance aircraft; the Space-Based Infrared System, a missile-detection satellite system; and the Comanche, a multi-mission helicopter. In response to congressional requirements, DOD, the military services, and the Missile Defense Agency have taken positive steps to improve the environment for acquiring software-intensive systems. However, their plans to implement software process improvement programs are not yet complete, and more work is required to ensure controls that would help managers increase the chances of successful acquisition outcomes.
Such controls include documenting baseline requirements agreements between the developer and acquirer that leverage systems engineering knowledge, meeting with the developer for periodic reviews (gates) during the development process, and obtaining meaningful metrics from the developer to manage the program. Furthermore, there are no assurances that program managers will be held accountable for using the plans once they are completed.
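The knowledge gates described above lend themselves to a simple checklist mechanic. The sketch below is a minimal Python illustration, not a depiction of any DOD or commercial review system; the phases and exit criteria it names are simplified assumptions. The point is the control itself: work proceeds to the next phase only when every exit criterion of the current phase has documented evidence behind it.

    # Minimal sketch of a knowledge-based gate review (illustrative criteria only).
    GATE_CRITERIA = {
        "requirements": ["baseline documented", "acquirer-developer agreement recorded"],
        "design":       ["design stable", "interfaces defined"],
        "code":         ["units coded", "unit tests passing"],
        "test":         ["integration complete", "defect backlog within threshold"],
    }

    def gate_passed(phase: str, evidence: set[str]) -> bool:
        """True only if every exit criterion for the phase has supporting evidence."""
        return all(criterion in evidence for criterion in GATE_CRITERIA[phase])

    # Hypothetical program state: a design review held with one criterion unmet.
    print(gate_passed("design", {"design stable"}))  # False -> do not proceed to coding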
TSA has made progress in meeting the 9/11 Commission Act air cargo screening mandate as it applies to domestic cargo, and has taken several key steps in this effort, such as increasing the amount of domestic cargo subject to screening, creating a voluntary program to allow screening to take place at various points along the air cargo supply chain, and taking steps to test air cargo screening technologies, among other actions. However, TSA faces several challenges in fully developing and implementing a system to screen 100 percent of domestic air cargo, including those related to industry participation and technology. TSA has taken several steps to address the air cargo screening mandate as it applies to domestic cargo, including the following: TSA increased the amount of domestic cargo subject to screening. Effective October 1, 2008, TSA established a requirement for 100 percent screening of nonexempt cargo transported on narrow-body passenger aircraft. In 2008, narrow-body flights transported about 24 percent of all cargo on domestic passenger flights. Effective February 1, 2009, pursuant to the 9/11 Commission Act, TSA also required air carriers to ensure the screening of 50 percent of all nonexempt air cargo transported on all passenger aircraft. Furthermore, effective May 1, 2010, air carriers were required by TSA to ensure that 75 percent of such cargo was screened. TSA also eliminated or revised most of its screening exemptions for domestic cargo. TSA created a voluntary program to facilitate screening throughout the air cargo supply chain. Because TSA concluded that relying solely on air carriers to conduct screening would result in significant cargo backlogs and flight delays, it created the voluntary Certified Cargo Screening Program (CCSP) to allow screening to take place earlier in the shipping process, prior to delivering the cargo to the air carrier. Under the CCSP, facilities at various points in the air cargo supply chain, such as shippers, manufacturers, warehousing entities, distributors, third-party logistics companies, and freight forwarders that are located in the United States, may voluntarily apply to TSA to become certified cargo screening facilities (CCSF). TSA initiated the CCSP at 18 U.S. airports that process high volumes of air cargo, and then expanded the program to all U.S. airports in early 2009. TSA is conducting outreach efforts to air cargo industry stakeholders. Starting in September 2007, TSA began outreach to freight forwarders and subsequently expanded its outreach efforts to shippers and other entities to encourage participation in the CCSP. TSA is focusing its outreach on particular industries, such as producers of perishable foods, pharmaceutical and chemical companies, and funeral homes, which may experience damage to their cargo if it is screened by a freight forwarder or an air carrier. TSA is taking steps to test technologies for screening air cargo. To test select screening technologies among CCSFs, TSA created the Air Cargo Screening Technology Pilot in January 2008, and selected some of the nation's largest freight forwarders to use these technologies and report on their experiences. In a separate effort, in July 2009, DHS's Directorate for Science and Technology completed the Air Cargo Explosives Detection Pilot Program, which tested the performance of select baggage screening technologies for use in screening air cargo at three U.S. airports.
In November 2008, in addition to the canine and physical search screening methods permitted by TSA to screen air cargo, TSA issued to air carriers and CCSFs a list of X-ray, explosives trace detection (ETD), and explosives detection systems (EDS) models that the agency approved for screening air cargo until August 3, 2010. In March 2009, TSA initiated a qualification process to test these and other technologies against TSA technical requirements; the agency plans to allow air carriers and CCSP participants to use the technologies that qualify in meeting the screening mandate. TSA expanded its explosives detection canine program. According to TSA, in fiscal year 2009, TSA-certified explosives detection canine teams screened over 145 million pounds of cargo, which represents a small portion of domestic air cargo. As of February 2010, TSA had 113 dedicated air cargo screening canine teams—operating in 20 major airports—and is in the process of adding 7 more teams. TSA also deployed canine teams to assist the Pacific Northwest cherry industry during its peak harvest season from May through July 2009, to help air carriers and CCSFs handling this perishable commodity meet the 50 percent screening requirement without disrupting the flow of commerce. TSA established a system to verify that screening is being conducted at the mandated levels. The agency established a system to collect and analyze data from screening entities to verify that the requisite levels for domestic cargo are being met. Effective February 2009, TSA adjusted air carrier reporting requirements and added CCSF reporting requirements to include monthly screening reports on the number and weight of shipments screened. TSA faces industry participation, technology, planning, oversight, and other challenges in meeting the air cargo screening mandate as it applies to domestic cargo. Industry Participation. Although TSA is relying on the voluntary participation of industry stakeholders to meet the screening mandate, far fewer shippers and independent CCSFs have joined the program than TSA had targeted. As shown in figure 1, TSA officials have estimated that an ideal mix of screening to achieve the 100 percent mandate as it applies to domestic cargo without impeding the flow of commerce would be about one-third of cargo weight screened by air carriers, one-third by freight forwarders, and one-third by shippers and independent CCSFs. To achieve TSA's ideal mix of screening by August 2010, shipper and independent CCSF screening efforts would need to increase more than sixteenfold. As shown in figure 1, the total percentage of reported screened cargo rose on average by less than a percentage point per month (from 59 to 68 percent) from February 2009 through March 2010. At these rates, it is questionable whether TSA's screening system will achieve 100 percent screening of domestic cargo by August 2010 without impeding the flow of commerce; a rough projection of this arithmetic is sketched below. Effective May 1, 2010, TSA requires that 75 percent of air cargo transported on passenger aircraft be screened. However, even if this requirement is met, an additional 25 percent of domestic air cargo would still need to be screened in the 3 months before the August 2010 deadline, including some of the most challenging types of cargo to screen, such as unit load device (ULD) pallets and containers. TSA and industry officials reported that several factors, such as a lack of economic and regulatory incentives, are contributing to low shipper participation levels.
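The doubt about the August 2010 deadline follows from the reported figures alone. The rough projection below is a Python sketch of that arithmetic; the assumption of continued linear month-over-month growth is our simplification, not a TSA forecast.

    # Rough projection of domestic screening rates (assumes linear growth).
    # Figures from the testimony: 59% in February 2009, 68% by March 2010.
    start_pct, end_pct = 59.0, 68.0
    months_elapsed = 13                            # Feb 2009 through Mar 2010
    rate = (end_pct - start_pct) / months_elapsed  # ~0.69 percentage point/month

    months_to_deadline = 5                         # Mar 2010 through Aug 2010
    projected = end_pct + rate * months_to_deadline
    months_to_100 = (100.0 - end_pct) / rate

    print(f"average growth: {rate:.2f} points/month")                 # 0.69
    print(f"projected level by August 2010: {projected:.0f}%")        # about 71%
    print(f"months to reach 100% at this pace: {months_to_100:.0f}")  # about 46

At the observed pace, screening would reach only about 71 percent by the deadline and would take nearly 4 more years to reach 100 percent, which is the basis for the concern stated above.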
Officials we interviewed from TSA and from domestic passenger air carrier and freight forwarder industry associations stated that many shippers and freight forwarders are not incurring significant screening costs from air carriers. This reduces the financial pressure on these entities to join the CCSP and to invest resources in screening cargo, which makes TSA's outreach efforts more challenging. Screening Technology. There is currently no technology approved or qualified by TSA to screen cargo once it is loaded onto a ULD pallet or container—both of which are common means of transporting air cargo on wide-body passenger aircraft. Cargo transported on wide-body passenger aircraft makes up 76 percent of domestic air cargo shipments transported on passenger aircraft. Prior to May 1, 2010, canine screening was the only screening method, other than physical search, approved by TSA to screen such cargo. However, TSA officials still have some concerns about the effectiveness of the canine teams, and effective May 1, 2010, the agency no longer allows canine teams to be used for primary screening of ULD pallets and containers. Canine teams may still be used for secondary screening of ULD pallets and containers; however, secondary screening does not count toward meeting the air cargo screening mandate. In addition, TSA is working to complete qualification testing of air cargo screening technologies; until all stages of qualification testing are concluded, the agency may not have reasonable assurance that the technologies that air carriers and program participants are currently allowed to use to screen air cargo are effective. Qualification tests are designed to verify that a technology system meets the technical requirements specified by TSA. Because of the mandated deadlines, TSA is conducting qualification testing to determine which screening technologies are effective at the same time that air carriers are using these technologies to meet the mandated requirement to screen air cargo transported on passenger aircraft. While we recognize that certain circumstances, such as mandated deadlines, require expedited deployment of technologies, our prior work has shown that programs with immature technologies have experienced significant cost and schedule growth. We reported that these technology challenges suggest the need for TSA to consider a contingency plan to meet the screening mandate without unduly affecting the flow of commerce. Contingency Planning. Although TSA faces industry participation and technology challenges that could impede the CCSP's success and the agency's efforts to meet the 100 percent screening mandate by August 2010, the agency has not developed a contingency plan that considers alternatives to address these challenges. Without adequate CCSP participation, industry may not be able to screen enough cargo prior to its arrival at the airport to maintain the flow of commerce while meeting the mandate. Likewise, without technology solutions for screening cargo in a ULD pallet or container, industry may not have the capability to effectively screen 100 percent of air cargo without affecting the flow of commerce. We have previously reported that a comprehensive planning process, including contingency planning, is essential to help an agency meet current and future capacity challenges.
Alternatives could include, but are not limited to, mandating CCSP participation for certain members of the air cargo supply chain—instead of relying on their voluntary participation—and requiring the screening of some or all cargo before it is loaded onto ULD pallets and containers. In the report being released today, we recommended that TSA develop a contingency plan for meeting the mandate as it applies to domestic cargo that considers alternatives to address potential CCSP participation shortfalls and screening technology limitations. TSA did not concur with this recommendation and stated that a contingency plan is unnecessary since, effective August 1, 2010, 100 percent of domestic cargo transported on passenger aircraft will be required to be screened. The agency also stated that there is no feasible contingency plan that it could implement that does not compromise security or create disparities in the availability of screening resources. However, we continue to believe that there are feasible alternatives that TSA should consider to address potential CCSP participation shortfalls and screening technology limitations. Thus, it is prudent that TSA consider developing a contingency plan that would provide for both the security and the legitimate flow of air cargo. Inspection Resources. While TSA has amended its Regulatory Activities Plan to include inspections of CCSP participants, the agency has not completed the staffing study needed to determine how many inspectors will be necessary to provide oversight of the additional program participants when the 100 percent screening mandate goes into effect. According to TSA, the agency's staffing study is continuing through fiscal year 2010 and is therefore not yet available to guide planning for the inspection resources needed to provide oversight. According to our analysis of TSA data, in the next year inspectors will need to at least double their comprehensive inspections of CCSFs to reach the agency's inspection goals. We recommended that TSA create milestones to help ensure completion of the staffing study. TSA concurred and stated that, as part of the staffing study, the agency is working to develop a model to identify the number of required transportation security inspectors and that this effort would be completed in the fall of 2010. If this model includes an analysis of the resources needed to provide CCSP oversight under various scenarios, it will address the intent of our recommendation. Reported Screening Data. While TSA reported to Congress that industry achieved the February 2009 50 percent screening deadline domestically, questions exist about the reliability of the screening data, which are self-reported by industry representatives, because TSA does not have a mechanism to verify the accuracy of the data. We recommended that TSA develop a mechanism to verify the accuracy of all screening data through random checks or other practical means. TSA stated that verifying the accuracy of domestic screening data will continue to be a challenge because there is no means to cross-reference local screening logs—which include screening information on specific shipments—with the screening reports submitted by air carriers to TSA, which do not contain such information. However, TSA could consider a quality review mechanism similar to the compliance measurement program used by U.S. Customs and Border Protection (CBP), which includes regular quality reviews to ensure accuracy in findings and management oversight to validate results. In-Transit Cargo.
Cargo that has already been transported on one leg of a passenger flight—known as in-transit cargo—may be subsequently transferred to another passenger flight without undergoing screening. According to TSA officials, though the agency does not have a precise figure, industry estimates suggest that about 30 percent of domestic cargo is transferred from an inbound flight. TSA officials stated that transporting in-transit cargo without screening could pose a vulnerability, but as of February 2010, the agency was not planning to require in-transit cargo transferred from an inbound flight to be physically screened because of the logistical difficulties associated with screening cargo that is transferred from one flight to another. We recommended that TSA develop a plan with milestones for how and when it intends to require the screening of in-transit cargo. TSA concurred with our recommendation and stated that the agency has implemented changes, effective August 1, 2010, that will require 100 percent of in-transit cargo to be screened unless it can otherwise be verified as screened. Because this change is significant and potentially challenging operationally, it will be important to closely monitor the industry's understanding and implementation of this requirement to help ensure that 100 percent screening of in-transit cargo is conducted. TSA has taken steps to increase the percentage of inbound cargo transported on passenger aircraft that is screened, but the agency has not developed a plan, including milestones, for meeting the mandate as it applies to inbound cargo. Consequently, TSA officials have stated that the agency will not be able to meet the mandate as it applies to inbound cargo by the August 2010 deadline. Steps TSA has taken to increase the percentage of inbound air cargo that is screened include (1) revising its requirements for foreign and U.S. air carrier security programs, effective May 1, 2010, to generally require air carriers to screen a certain percentage of shrink-wrapped and banded inbound cargo and 100 percent of inbound cargo that is not shrink-wrapped or banded (according to TSA, implementation of this requirement will result in the screening of 100 percent of inbound cargo transported on narrow-body aircraft, since none of this cargo is shrink-wrapped or banded); (2) obtaining information from foreign countries on their respective air cargo screening levels and practices to help assess the rigor and quality of foreign screening practices; and (3) working to harmonize security standards with those of foreign nations. According to TSA, screening inbound air cargo poses unique challenges, related in part to TSA's limited ability to regulate foreign entities. As such, TSA officials stated that the agency is focusing its air cargo screening efforts on domestic cargo and on screening elevated-risk inbound cargo as it works to address the challenges it faces in screening 100 percent of inbound cargo. In April 2007, we reported that TSA's screening exemptions for inbound cargo could pose a risk to the air cargo supply chain and recommended that TSA assess whether these exemptions pose an unacceptable vulnerability and, if necessary, address these vulnerabilities.
TSA agreed with our recommendation, but beyond its requirement to screen 100 percent of inbound cargo transported on narrow-body aircraft and a certain percentage of shrink-wrapped or banded inbound cargo, has not reviewed, revised, or eliminated inbound screening exemptions, and did not provide a time frame for doing so. We continue to believe that TSA should assess whether these exemptions pose an unacceptable security risk. In addition, identifying the precise level of screening being conducted on inbound air cargo is difficult because TSA lacks a mechanism to obtain actual data on all screening that is being conducted on inbound air cargo. TSA officials estimate that 55 percent of inbound cargo by weight is currently being screened and that 65 percent of inbound cargo by weight will be screened by August 2010. However, these estimates are based on the current screening requirements of certain countries and are not based on actual data collected from air carriers or other entities, such as foreign governments, on what percentage of cargo is actually being screened. We recommended that TSA develop a mechanism to verify the accuracy of all screening data through random checks or other practical means and obtain actual data on all inbound screening. TSA concurred in part with our recommendation, stating that as of May 1, 2010, the agency issued changes to air carriers’ standard security programs that require air carriers to report inbound cargo screening data to TSA. However, as noted in our report, these requirements apply to air carriers and the screening that they conduct and not to the screening conducted by other entities, such as foreign governments. Thus, TSA will continue to rely in part on estimates to report inbound cargo screening levels. TSA officials stated that it may be challenging to obtain screening data from some foreign governments and other entities that conduct cargo screening, but TSA has not developed a plan for how it could obtain these data. We recognize that it may be challenging for TSA to obtain cargo screening data from foreign governments; however, similar to domestic reporting requirements, the agency could require air carriers to report on cargo screening for all inbound cargo they transport, including the screening conducted by other entities. Moreover, the 9/11 Commission Act requires the establishment of a system to screen 100 percent of cargo transported on passenger aircraft, including inbound cargo. As we have reported in our prior work, a successful project plan—such as a plan that would be used to establish such a system—should consider all phases of the project and clearly state schedules and deadlines. TSA officials reported that the agency is unable to identify a timeline for meeting the mandate for inbound cargo, stating that its efforts are long term, given the extensive work it must conduct with foreign governments and associations. However, interim milestones could help the agency provide reasonable assurance to Congress that it is taking steps to meet the mandate as it applies to inbound cargo. In our June 2010 report, we recommended that TSA develop a plan with milestones for how and when the agency intends to meet the mandate as it applies to inbound cargo. TSA concurred with our recommendation and stated that the agency is drafting milestones as part of a plan that will generally require air carriers to conduct 100 percent screening by a specific date. If implemented effectively, this plan will address the intent of our recommendation. 
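Because TSA's inbound figures are derived from the screening requirements of certain countries rather than from reported data, they amount to a cargo-weight-weighted average across countries of origin. The sketch below shows the form such an estimate takes; every share and screening level in it is invented for illustration and is not TSA, air carrier, or foreign-government data.

    # Hypothetical illustration of an inbound screening estimate as a weighted
    # average. Each entry pairs an invented share of inbound cargo by weight
    # with an assumed screening level under that origin's requirements.
    origin_mix = [
        (0.30, 1.00),  # origins already requiring 100% screening
        (0.45, 0.50),  # origins requiring partial screening
        (0.25, 0.10),  # origins with minimal requirements
    ]
    estimated_pct = 100 * sum(share * level for share, level in origin_mix)
    print(f"estimated inbound cargo screened: {estimated_pct:.0f}%")  # 55% here

Actual reporting from air carriers on all the cargo they transport, as discussed above, would replace assumed screening levels with measured ones.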
Madam Chairwoman, this concludes my statement. I look forward to answering any questions that you or other members of the subcommittee may have. For questions about this statement, please contact Stephen M. Lord at (202) 512-4379 or lords@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this testimony are Steve D. Morris, Assistant Director; Tina Cheng; Barbara A. Guffy; David K. Hooper; Richard B. Hung; Stanley J. Kostyla; Linda S. Miller; Yanina Golburt Samuels; and Rebecca Kuhlmann Taylor.
This testimony discusses air cargo screening. In 2008, about 7.3 billion pounds of cargo was transported on U.S. passenger flights--approximately 58 percent of which was transported domestically (domestic cargo) and 42 percent of which was transported on flights arriving in the United States from a foreign location (inbound cargo). The 2009 Christmas Day plot to detonate an explosive device during an international flight bound for Detroit provided a vivid reminder that terrorists continue to view passenger aircraft as attractive targets. According to the Transportation Security Administration (TSA), the security threat posed by terrorists introducing explosive devices in air cargo shipments is significant, and the risk and likelihood of such an attack directed at passenger aircraft are high. To help enhance the security of air cargo, the Implementing Recommendations of the 9/11 Commission Act of 2007 (9/11 Commission Act) required the Department of Homeland Security (DHS) to establish a system to physically screen 50 percent of cargo on passenger aircraft--including the domestic and inbound flights of foreign and U.S. passenger operations--by February 2009, and 100 percent of such cargo by August 2010. The 9/11 Commission Act defines screening for purposes of the air cargo screening mandate as a physical examination or nonintrusive methods of assessing whether cargo poses a threat to transportation security. The act also requires that such a system provide a level of security commensurate with the level of security for the screening of checked baggage. According to TSA, the mission of its air cargo security program is to secure the air cargo transportation system while not unduly impeding the flow of commerce. Although the mandate is applicable to both domestic and inbound cargo, TSA stated that it must address the mandate for domestic and inbound cargo through separate systems because of differences in its authority to regulate domestic and international air cargo industry stakeholders. This testimony is based on a report we are publicly releasing today that assesses TSA's progress and related challenges in meeting the air cargo screening mandate. It addresses the following key issues in our report: the progress TSA has made in meeting the 9/11 Commission Act screening mandate as it applies to (1) domestic air cargo and (2) inbound air cargo, and the related challenges it faces for each. For our report, we reviewed documents such as TSA's air cargo security policies and procedures. We also conducted site visits to four category X U.S. commercial airports and one category I U.S. commercial airport that process domestic and inbound air cargo. We selected these airports based on airport size, passenger and air cargo volumes, location, and participation in TSA's screening program. At these airports, we observed screening operations and technologies and interviewed local TSA officials, airport management officials, and representatives from 7 air carriers, 24 freight forwarders, 3 shippers, and 2 handling agents to obtain their views on TSA's system to implement the screening mandate. We selected these air carriers, freight forwarders, shippers, and handling agents based on input from TSA and industry stakeholders. More detailed information about our scope and methodology is included in our June 2010 report. We conducted this work in accordance with generally accepted government auditing standards.
TSA has taken a number of actions to meet the screening mandate as it applies to domestic cargo, including creating a voluntary program to allow screening to take place at various points in the air cargo supply chain and mandating that, effective May 1, 2010, 75 percent of all cargo transported on passenger aircraft be screened. However, TSA faces several challenges in developing and implementing a system to screen 100 percent of domestic air cargo, and it is questionable, based on reported screening rates, whether 100 percent of such cargo will be screened by August 2010 without impeding the flow of commerce. Moreover, TSA has made some progress in meeting the screening mandate as it applies to inbound cargo, but challenges exist, in part related to TSA's limited ability to regulate foreign entities. TSA does not expect to achieve 100 percent screening of inbound air cargo by the mandated August 2010 deadline. We made five recommendations to TSA to address these challenges. TSA concurred with three of these recommendations, partially concurred with one, and did not concur with the remaining recommendation, which we discuss in more detail later in this statement.
VA’s mission is to serve America’s veterans and their families and to be their principal advocate in ensuring that they receive medical care, benefits, and social support in recognition of their service to our nation. VA, headquartered in Washington, D.C., is the second largest federal department and reported it had over 230,000 employees as of September 30, 2007, including physicians, nurses, counselors, statisticians, computer specialists, architects, and attorneys. VA has three major line organizations—VHA, VBA, and NCA—and field facilities throughout the United States. VHA has 21 Veterans Integrated Service Networks (VISN) that oversee medical center activities within their areas, which may cover one or more states. VA provides employees, contractors, volunteers, and students with a wide range of IT equipment, including desktop and laptop computers, monitors and printers, personal digital assistants, unit-level workstations, local area networking equipment, and medical equipment with memory and data processing/communication capabilities. By the start of fiscal year 2008, VA had centralized its IT function at all locations within the realigned OIT. The Assistant Secretary for Information and Technology heads VA’s OIT, serves as the CIO for the department, and is the principal advisor to the Secretary on matters relating to IT management in the department. OIT staff share responsibility for management of IT equipment inventory with property management personnel. Accordingly, it is crucial for the department’s CIO to have the cooperation of property managers to ensure that well-established integrated processes exist for controlling IT inventory. The steps in the IT property management process are key events, which should be documented by an inventory transaction, a financial transaction, or both, as appropriate. Federal records management law, as codified in Title 44 of the U.S. Code and implemented through National Archives and Records Administration (NARA) guidance, requires federal agencies to adequately document and maintain proper records of essential transactions and have effective controls for creating, maintaining, and using records of these transactions. Table 1 provides an overview of VA’s IT property management process. VA has made significant progress in addressing our previous recommendations directed at improving policies and procedures for control of IT equipment and reducing the risk of disclosure of sensitive personal and medical information. As of the end of our field work in July 2008, VA had completed action on 10 of our 12 recommendations from our July 2007 report. VA’s Assistant Secretary for Management and the CIO worked together to draft a revised property management policy in a new VA Handbook 7002, Logistics Management Procedures, which addresses 7 of our 2007 recommendations. This revised policy is an important step in establishing a framework for control of IT equipment. On July 3, 2008, the Assistant Secretary for Management mandated early implementation of this policy, which includes requirements for user-level accountability, time frames for completing Reports of Survey on missing and stolen property, and requirements for strengthening physical security. VA also partially implemented action on one other recommendation and has actions under way to address the remaining recommendation from our 2007 report. Successful implementation of these efforts will be key to improving controls over VA’s IT equipment. 
VA also made progress implementing recommendations from our 2004 report related to personal property and equipment management. VA completed action on four of six property-related recommendations in our 2004 report and partially completed action on a fifth recommendation. VA has plans to address the remaining 2004 recommendation. In addition, in response to your concerns about VA-wide controls based on our previous audits, VA required departmentwide physical inventories of IT equipment to be completed by December 31, 2007. OIT monitored the 2007 physical inventory effort for IT equipment and reported that as of May 15, 2008, VA was unable to locate approximately 62,800 recorded IT equipment items, of which over 9,800 could have stored sensitive information. The CIO formed a “tiger team” to monitor efforts under the Report of Survey system and to help ensure that Reports of Survey are completed in a timely manner. To address recommendations in our July 2007 report, VA completed action on 10 of our 12 recommendations, partially implemented actions on one other recommendation, and has actions under way to address the remaining recommendation. VA actions on our 2007 report recommendations included the establishment of specific time frames for finalizing Reports of Survey, granting OIT personnel access to the central property database, and holding employees financially liable for lost IT equipment. In addition, VA completed action on four of the six recommendations in our July 2004 report, partially completed action on a fifth recommendation, and has plans to address the remaining recommendation. For example, VA revised its policy through VA Handbook 7127/4, Materiel Management Procedures, to state that sensitive items include IT equipment and named several types of IT equipment items. VA’s revised policy also stated that IT equipment items valued under $5,000 are to be included in physical inventories. Further, VA has drafted policies that provide a framework for strengthening controls over IT equipment, including VA Handbook 7002, Logistics Management Procedures. On July 3, 2008, VA’s Assistant Secretary for Management mandated early implementation of this handbook. Effective implementation of this new policy will be essential to ensuring adequate control and accountability of VA’s IT equipment and any sensitive information residing on that equipment. Table 2 provides a summary of our 2007 and 2004 recommendations and the current status of VA actions. For a more detailed explanation of VA’s actions taken and planned on our recommendations, see appendix II. VA’s 2007 departmentwide inventory initially identified approximately 79,000 missing IT equipment items, underscoring the need to effectively implement the new policies and procedures mandated on July 3, 2008. In the 6 months following completion of the physical inventory, VA facilities undertook efforts to locate or determine reasons for missing items. VA was able to locate several thousand of the missing equipment items. However, as summarized in table 3, on May 15, 2008, OIT reported that VA was unable to locate approximately 62,800 recorded IT equipment items, of which over 9,800 could have stored sensitive information. Because VA does not know what, if any, sensitive information resided on the equipment and when the equipment was lost, notifications to potentially affected individuals could not be made in accordance with OMB guidance.
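To make the mechanics of such a reconciliation concrete, the following minimal sketch compares a recorded equipment listing against the items actually located during a physical count and flags unlocated items that could store sensitive information. The property records shown are hypothetical and do not reflect VA data or VA's property system schema.

```python
# Minimal sketch of the reconciliation step behind a physical inventory:
# compare the recorded equipment listing against items actually located,
# then flag unlocated items that could store data. All records below are
# hypothetical.
recorded = {
    "EE100001": {"item": "laptop computer", "stores_data": True},
    "EE100002": {"item": "monitor", "stores_data": False},
    "EE100003": {"item": "medical analyzer", "stores_data": True},
}
located = {"EE100002"}  # property IDs confirmed during the wall-to-wall count

missing = {pid: rec for pid, rec in recorded.items() if pid not in located}
sensitive = [pid for pid, rec in missing.items() if rec["stores_data"]]

print(f"{len(missing)} recorded items not located; "
      f"{len(sensitive)} could have stored sensitive information")
```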
We interviewed VA officials and obtained documentation on the VA-wide inventory; however, we did not validate the results. According to VA, many of the missing items were old equipment and may have been disposed of through VA’s excess property program. However, because VA facilities had not always documented IT equipment disposal for many years, there was no way to determine whether any of the missing items were lost or stolen. Further, during our work, we discovered that not all IT equipment items were included in the departmentwide inventory. Consequently, the numbers of missing items could be higher. For example, VA’s 2007 physical inventory did not include medical equipment with data storage or processing capabilities. In addition, IT equipment items not accounted for in the OIT equipment inventory listing (EIL) were not subject to the 2007 physical inventory at some VA facilities. Further, limited completeness tests we performed as part of our data reliability procedures at case study locations identified some IT equipment items recorded to EILs for organizations other than OIT. Prior to the establishment of OIT, EILs were aligned organizationally, and some IT equipment assigned to other EILs had not yet been reassigned to the OIT EIL and, therefore, was omitted from the 2007 physical inventory. We discussed our finding with OIT officials, and they told us that they had met in June 2008 to develop strategies for moving all IT equipment items assigned to other EILs to the OIT EIL. In compliance with VA Handbook 7125, General Procedures, VA personnel submitted Reports of Survey for IT equipment items that were not located during the departmentwide physical inventory and subsequent follow-up investigation. A CIO tiger team was responsible for monitoring the Report of Survey process and helping to ensure that it was completed in a timely manner. Local Boards of Survey were responsible for investigating missing items and approving write-offs of IT equipment items that could not be located during the departmentwide physical inventory. However, as of May 15, 2008, VA had over 43,000 items that were listed on open Reports of Survey, and facility personnel were continuing to search for missing items. The 2007 physical inventories were a massive undertaking and required significant effort over several months to resolve discrepancies. Although we would have expected the VA locations that we previously tested to have few, if any, missing items, as of May 15, 2008, 6 of the 12 locations reported from 1,269 to 6,427 missing IT items; 4 locations had from 115 to 863 missing IT items; and only 2 locations had fewer than 100 missing items. A summary of Reports of Survey data on missing IT equipment and the reported original acquisition cost identified in VA’s 2007 physical inventory related to sites we tested in our 2004, 2007, and 2008 audits is presented in appendix IV. Our tests of IT equipment inventory controls at four case study locations, including three VA health care systems (HCS) and VA headquarters, identified continuing control weaknesses related to missing items, lack of accountability, and errors in IT equipment inventory records. VA’s 2007 departmentwide physical inventory effort was intended to establish a reliable IT equipment inventory baseline going forward. Accordingly, our tests excluded from the population of IT equipment thousands of items identified as missing during VA’s 2007 IT physical inventory effort.
Given the new baseline, if adequate controls had been in place by the end of this inventory process, we would not have expected to identify missing items, blank data fields, or inaccurate inventory records at our test locations. As previously noted, in July 2008 VA mandated early implementation of revised policy related to control of IT equipment. Although the early implementation of July 2008 policy changes may address IT equipment control weaknesses, this policy was not in effect at the time of our tests. Our Standards for Internal Control in the Federal Government states that a positive control environment provides discipline and structure as well as the climate that influences the quality of internal control. Further, these standards require agencies to establish physical control to secure and safeguard vulnerable assets, such as equipment that might be vulnerable to risk of loss or unauthorized use, including periodically counting the assets and comparing the results to control records. However, our tests of IT equipment inventory controls at the four case study locations, including three VA HCS and VA headquarters, identified continuing problems with (1) inventory control and accountability, (2) control over computer hard drives in the excess property disposal process, and (3) physical security of IT equipment storage locations. For example, our statistical tests at the four locations from February through May of 2008 identified significant numbers of missing items, several of which could have stored sensitive personal and medical information. Overall, our statistical tests and data analysis at the four locations found significant failures related to IT inventory control and accountability, including (1) missing items, (2) blank serial numbers, (3) inaccurate information on user organization, (4) inaccurate information on user location, and (5) other recordkeeping errors. We also identified weaknesses in the controls over computer hard drives in the property disposal process at the four test locations, involving (1) lack of timely sanitization and disposal, (2) inadequate recordkeeping, and (3) physical security. In addition, we found physical security weaknesses at IT storage facilities at all four locations. These weaknesses increase the risk that sensitive personal and medical information could be compromised. Our 2008 statistical tests of key IT equipment inventory controls and data analysis found significant inventory control failures across all five of these control attributes. As noted previously, VA performed a 2007 physical inventory of IT equipment. We excluded from our populations the missing items identified during VA’s physical inventory at the four case study locations. Table 4 shows the 2007 VA-wide inventory results related to missing items at our four case study locations. Given these exclusions, if adequate controls had been in place, we would not have expected to identify additional missing items, blank data fields, or inaccurate inventory records. Table 5 shows the results of our statistical tests at the four case study locations. We present our results as point estimates of control failure rates. Each point estimate has a margin of error, based on a two-sided, 95 percent confidence interval, of plus or minus 10 percent or less.
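The worksheets behind these estimates are not published in this report. As a minimal sketch of how such figures can be derived, the following computes a weighted failure-rate point estimate and a two-sided 95 percent confidence interval for a stratified random sample; the strata sizes, sample sizes, and failure counts are hypothetical, not VA data.

```python
import math

# Hypothetical strata: (population size N_h, sample size n_h, failures f_h).
strata = [
    (9_000, 120, 6),
    (2_500, 60, 2),
    (700, 30, 1),
]

N = sum(N_h for N_h, _, _ in strata)

# Weighted point estimate: each sampled item stands in for N_h / n_h
# population items in its stratum.
p_hat = sum(N_h * (f_h / n_h) for N_h, n_h, f_h in strata) / N

# Variance of the stratified estimator under the normal approximation,
# with a finite-population correction for each stratum.
var = sum(
    (N_h / N) ** 2
    * (1 - n_h / N_h)
    * (f_h / n_h) * (1 - f_h / n_h) / (n_h - 1)
    for N_h, n_h, f_h in strata
)
margin = 1.96 * math.sqrt(var)  # half-width of the two-sided 95 percent interval

print(f"Estimated failure rate: {p_hat:.1%} +/- {margin:.1%}")
```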
Serial number control is essential to accountability for sensitive items, such as IT equipment, because it identifies unique items. The property bar code label alone is not a sufficient identifier for sensitive items because these labels are removable and can be replaced, if lost or damaged. In addition, because VA has not yet put in place a control for user-level accountability, accurate information on user organization and user location is key to maintaining accountability for IT equipment items. Further, recordkeeping errors impair the reliability of IT inventory information for management decision making. For example, inaccurate inventory records on item name, model number, and manufacturer impair asset visibility and affect decision making on timing of IT equipment upgrades. As discussed previously, limited completeness testing performed as part of our data reliability procedures identified IT equipment that was not included in the populations of recorded IT equipment used for our control tests. For example, our completeness tests at two of the four locations we tested identified three IT equipment items that were recorded to EILs for Psychology, Radiology, and Acquisition and Material Management rather than the OIT EIL. Our completeness tests also identified one item not recorded to an EIL. VA officials could not tell us the quantity of IT equipment items that were not included in the four case study IT equipment populations from which we selected our samples for testing. Our tests of physical inventory controls from February through May of 2008 identified 50 missing IT equipment items, including 9 medical equipment items. Of the 50 missing items, 34 items could have stored sensitive personal and medical information. Because VA does not know what, if any, sensitive information resided on the equipment, notifications to potentially affected individuals could not be made. Following the recent completion of VA inventories of IT equipment and adjustment of inventory records at the four test locations, we would not have expected to identify any additional missing items. The continuing occurrences of missing items indicate that underlying control weaknesses have not yet been corrected. Lost and missing IT equipment pose both a financial risk as well as a security risk associated with sensitive information maintained on computer hard drives. The scope of our IT equipment inventory tests was broader than VA’s IT inventory because we included medical items with data storage capability. Medical equipment with data storage capability is not currently included in VA’s definition of IT equipment. VA CIO officials told us they are aware of the need to control medical equipment with data storage capability and plan to address control of IT components of this equipment. The following discussion summarizes the results of our inventory control tests at the four case study locations. North Texas HCS. As noted in table 5, our physical inventory testing of the North Texas HCS—which covered the Dallas VA Medical Center and Fort Worth Outpatient Clinic components—found high control failure rates for all of our inventory control tests. Our existence test identified seven missing items, including two that had the capability to store sensitive information. One of the missing items was a piece of medical equipment. As noted in table 5, we estimated a 6 percent failure rate related to the missing items in the recorded population of 12,172 IT equipment items from which we selected our sample. 
In addition, our analysis of the population of recorded IT equipment found that 7,164, or about 59 percent, did not have their serial numbers recorded in the physical inventory records. Serial numbers are essential to proper identification of sensitive computer equipment. Boston HCS. Our physical inventory testing of the Boston HCS—which covered the Brockton, Jamaica Plain, and West Roxbury Campuses—identified 10 missing items, including 7 that had the capability to store sensitive information. The 7 missing items included four medical analyzers, two microcomputers, and a radiology equipment item. As noted in table 5, we estimated a 3 percent failure rate related to the missing items in the recorded population of 15,706 IT equipment items from which we selected our sample. Puget Sound HCS. The Puget Sound HCS had an estimated failure rate of 1 percent related to missing items in the recorded population of 11,474 IT equipment items, allowing us to conclude that the HCS’s controls over existence of IT equipment inventory are effective. Further, the one item we determined to be missing related to a computer monitor, which did not have the capability to store data. However, the Puget Sound HCS had high failure rates for the user information and recordkeeping tests. VA Headquarters Organizations. Our physical inventory testing of VA headquarters organizations’ IT equipment items identified an estimated failure rate of 12 percent related to missing items in the recorded population of 34,735 items. Our population included strata for VHA, VBA, OIT, Acquisition and Materiel Management, General Counsel, Policy and Planning, and a seventh stratum with all other headquarters organizations. Table 6 identifies missing IT equipment items in our stratified sample by VA headquarters organization. As was the case with our 2007 audit of VA IT equipment inventory controls, we found a lack of user-level accountability at the four case study locations in our current tests. As shown in table 7, VA has not yet assured accurate IT inventory records with regard to user organization and location. Information on organization and location is key to maintaining visibility and accountability for IT equipment items. VA property management policy and VA Handbook 7002 include guidelines for holding employees and supervisors pecuniarily (financially) liable for loss, damage, or destruction because of negligence or misuse of government property. Several VA facilities have provided us with current examples in which VA employees have been held liable for lost and missing IT equipment. Since the completion of our tests, VA has mandated early implementation of Handbook 7002, which also requires assignment of user-level accountability for most IT equipment items. To be effective, the new policy will need to be adequately implemented and enforced. The following discussion summarizes the results of our tests for user-level accountability. North Texas HCS. The North Texas HCS components we tested had very high failure rates related to accountability—an estimated 91 percent for correct user organization and an estimated 46 percent for correct user location. North Texas HCS staff provided us with evidence of sign-out sheets and hand receipts for some IT equipment items such as pagers, cellular telephones, and personal digital assistants. However, for a majority of IT equipment items, the North Texas HCS did not assign user-level accountability through hand receipts or record user information in the inventory system.
For medical IT equipment items, the inventory system included user organizations (e.g., radiology or anesthesiology), but did not assign the items to unit heads. Boston HCS. The Boston HCS campuses we tested also had high failure rates related to accountability—an estimated 60 percent for correct user organization and an estimated 17 percent for correct user location. At our exit briefing in May 2008, Boston HCS staff reported that they are working with engineering personnel to better identify physical locations to aid in the tracking of mobile IT equipment items. For traditional IT equipment items, the Boston HCS generally did not record user organization in its IT equipment inventory records. Further, the Boston HCS generally did not assign user-level accountability through recorded user information or hand receipts, with the exception of pagers, cell phones, and laptops that have been assigned to specific users. For medical IT equipment items, the inventory system included user organizations (e.g., radiology or anesthesiology). However, the inventory records for some of the equipment listed the user as “Medical” or “Nursing” and did not specify what unit in the hospital was accountable for the equipment. Puget Sound HCS. The Puget Sound HCS components we tested also had high failure rates related to accountability—an estimated 76 percent for correct user organization and an estimated 14 percent for correct user location. The Puget Sound HCS staff provided us with evidence of a locally developed supplemental application for AEMS/MERS, known as the Equipment Loan Form (ELF). Puget Sound HCS staff use the ELF to record user-level information for mobile IT equipment items (e.g., laptop computers) or IT equipment items taken off-site (e.g., a desktop computer at an employee’s home). However, for traditional IT equipment items (e.g., desktop computers, printers, and monitors at HCS facilities), the HCS did not assign user-level accountability with recorded user information or hand receipts. For traditional IT equipment items, the inventory records generally did not identify the user organizations. For medical IT equipment items, the inventory system included user organizations (e.g., radiology or anesthesiology), but did not assign accountability for shared items to unit heads. VA Headquarters Organizations. Our statistical tests for accurate user organization information identified an estimated 12 percent error rate for VA headquarters organizations. In addition, our statistical tests for correct user information identified an estimated 52 percent error rate. Our tests included IT equipment coordinators—who are responsible for control of equipment shared by multiple users—and individual user employees. In situations where equipment, such as a printer, was shared by multiple employees, we based our tests on whether the inventory records correctly listed the equipment coordinator. In other situations where equipment was in the possession and use of an individual employee, we tested to see if that employee was listed in the property record. Overall, we found 147 errors out of a sample of 349 records tested. Regarding user location, our statistical tests found an estimated 33 percent error rate related to situations where inventory records were not updated to reflect the transfer or relocation of IT equipment.
We also identified inconsistencies in the use of hand receipts for assigning user-level accountability of mobile IT equipment that can be removed from VA offices for use by employees who are on travel or are working at home. For example, we requested hand receipts for 38 mobile IT equipment items in our statistical sample that were being used by VA headquarters employees. These items either could be or were taken off-site. We received 20 hand receipts—4 that were dated after the date of our request and 16 that were valid. We did not receive hand receipts for the other 18 devices. As shown in table 8, we found some problems with the accuracy of IT equipment inventory records, ranging from an estimated 4 percent at VA headquarters to an estimated 41 percent at the Boston HCS. Recordkeeping errors included inaccurate information on the status (in use, turned-in, disposal), serial numbers, and item descriptions. Although the estimated overall failure rates for these tests were lower than the failure rates for the other control attributes we tested, they were significant for the various recordkeeping attributes we tested at the four locations. Accurate IT equipment inventory records are important to management decision making because these records are used to determine the types, quantities, and age of equipment as well as life cycle and replacement time frames. Inaccurate information on the status of items—in service, sent for repair, turned in for disposal—masks visibility of items that are not available for use and may need to be replaced. Serial number errors, such as typographical errors, can impair accountability. Further, inaccurate inventory information can cause significant waste and inefficiency during physical inventories because it may require additional time to locate and verify the status of the items. Our review of the data submissions from all four test locations we visited identified data consistency and standardization issues with recorded names, models, and manufacturers of IT equipment. As a result, management at facilities we tested could not tell how many items of a certain model they had in service. Because property system data fields for item description are free-form and do not provide for data standardization, accurate data entry is critical to the identification of like items. For example, North Texas HCS inventory data showed one Solar 8000 physiological monitor listed as model “soalr 8000,” one listed as “Solar 800,” 26 listed as “Solar 8000,” and 70 listed as “Solar8000.” Although some of these differences appear to be typographical errors, when searching for Solar 8000 equipment in the database, there is no assurance that other variations of the item name would appear in the search results. Further, this situation hindered the North Texas HCS staff’s identification of medical IT equipment items that store or process patient data, requiring us to select a second sample and make an additional site visit. At the Boston HCS, we found that Samsung monitor model number 150N was referred to inconsistently as a “Monitor” 4 times, “Neoware” 3 times, “Samsung 15 Inch” 33 times, and a “Samsung Monitor” 58 times. VA’s policy does not address data consistency and standardization. 
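The Solar 8000 example illustrates why free-form description fields defeat simple database searches. The sketch below shows one kind of edit check that could catch such variants by normalizing recorded names before comparison and flagging near matches for manual review; the normalization rules are illustrative assumptions, not a feature of VA's property systems.

```python
import difflib
import re

# The recorded names come from the North Texas HCS example above; the
# matching logic is a demonstration, not VA policy.
recorded_names = ["soalr 8000", "Solar 800", "Solar 8000", "Solar8000"]
canonical = "Solar 8000"

def normalize(name: str) -> str:
    """Lowercase and drop spacing/punctuation so that trivial formatting
    variants (e.g., 'Solar8000' vs. 'Solar 8000') compare equal."""
    return re.sub(r"[^a-z0-9]", "", name.lower())

for name in recorded_names:
    if normalize(name) == normalize(canonical):
        verdict = "match after normalization"
    elif difflib.get_close_matches(normalize(name), [normalize(canonical)], cutoff=0.8):
        verdict = "near match; flag for manual review as a possible typo"
    else:
        verdict = "no match; a database search for the model would miss it"
    print(f"{name!r}: {verdict}")
```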
Our Internal Control Management and Evaluation Tool states that an agency should establish a variety of control activities suited to information processing systems to ensure accuracy and completeness, consider whether edit checks are used in controlling data entry, and consider accuracy control in relation to data entry design features. Although this tool is not required to be used, it is intended to provide a systematic, organized, and structured approach for federal agency use in assessing internal control structure. The failure to maintain consistent information on identical items or classes of items impairs visibility over IT assets as well as analysis and management decision making on existing IT equipment and replacements. Although VA requires that hard drives of IT equipment and medical equipment be sanitized prior to disposal to prevent unauthorized release of sensitive personal and medical information, we found weaknesses in the disposal process at each of our test locations that pose a risk that sensitive personal and medical information could be compromised. Specifically, we found weaknesses related to (1) timeliness of data sanitization, (2) adequacy of inventory recordkeeping for hard drives removed from their host computers, and (3) physical security controls. Currently, VA OIT personnel are not cleansing all hard drives in the property disposal process because of the guidance from VA’s Office of General Counsel to preserve electronic information relevant to a class action lawsuit filed against VA in 2007 (the litigation hold), which heightens the need to maintain control over the hard drives in the property disposal process. However, two case study locations had not performed timely sanitization and disposal of hard drives prior to the effective date of the litigation hold. Specifically, one of our HCS test locations had stored excess hard drives for 3 to 4 years and another HCS test location indicated some of its excess hard drives dated back to the 1980s. Two HCS locations did not record dates that all hard drives were received. VA headquarters organizations did not keep records on hard drives in the disposal process prior to February 2008. In addition, adequate control over computer hard drives in the property disposal process requires accurate and complete recordkeeping, such as recording the hard drive serial number along with property identification and serial numbers of the original host computer. The ability to identify hard drives with the host computer inventory records also provides a means to determine the type of data that may have been stored on the hard drives. However, two of our four test locations did not record sufficient information to identify hard drives with host computers, and VA did not have a standard procedure to address this issue. Moreover, although storage locations used for excess hard drives are subject to access controls in VA Handbook 0730/1, Security and Law Enforcement, including motion detection intrusion alarm systems and special key (access) controls, three of our four case study locations did not comply with these requirements. Weaknesses in the controls over hard drives in the property disposal process create an unnecessary risk that sensitive personal information protected under the Privacy Act of 1974 and health information accorded additional protections under the Health Insurance Portability and Accountability Act of 1996 (HIPAA) could be compromised. 
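As a concrete illustration of the recordkeeping described above, the following minimal sketch ties each removed hard drive to its original host computer and tracks its sanitization status. The field names and values are hypothetical and do not represent VA's actual log format.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class HardDriveDisposalRecord:
    drive_serial: str               # serial number of the removed hard drive
    host_property_id: str           # property (bar code) number of the host computer
    host_serial: str                # serial number of the host computer
    removed_on: date                # supports assessing timeliness of sanitization
    sanitized_on: Optional[date] = None  # remains None until the drive is cleansed

    def awaiting_sanitization(self) -> bool:
        return self.sanitized_on is None

# Example entry: a drive pulled from a hypothetical desktop and still held
# pending disposition (for instance, under a litigation hold).
record = HardDriveDisposalRecord(
    drive_serial="WD-5VX10423",
    host_property_id="EE123456",
    host_serial="2UA8140ABC",
    removed_on=date(2008, 2, 15),
)
print(record.awaiting_sanitization())  # True
```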
The following discussion summarizes our findings at the four case study locations. North Texas HCS. We found that the North Texas HCS had weaknesses in controls over hard drives in the property disposal process related to timely sanitization, inadequate recordkeeping, and lack of access controls. According to North Texas HCS staff, they were not sanitizing data from any hard drives in the property disposal process at the time of our site visit because of the litigation hold related to the class action lawsuit. The North Texas HCS also indicated that not all hard drives received for sanitization and disposal had been logged in their tracking system. However, for those drives that were recorded, we found that the hard drive disposal records contained sufficient information for identifying hard drives with their original host computers. In addition, the disposal records contained the dates on which the hard drives were removed from their original host computers. The North Texas HCS also maintained a file on certifications of drives that had been cleansed. Further, we observed that one of the two hard drive storage locations had inadequate physical security because of the absence of an access control system and intrusion detection alarm system, as required by VA Handbook 0730/1. Boston HCS. Our work identified recordkeeping weaknesses in the hard drive disposal process at the Boston HCS. Specifically, we found that the hard drive disposal records did not contain sufficient information for identifying hard drives with their original host computers. Further, these records did not indicate the dates on which OIT personnel removed hard drives from their original host computers, which would impede an assessment of timely sanitization or disposal. The Boston HCS also had a practice of storing used hard drives in unsecured locations, such as closets and cabinets, and indicated that it had hard drives dating back to the 1980s. The Boston HCS Information Security Officer is in the process of establishing a centralized storage facility for computer hard drives. Puget Sound HCS. We identified control weaknesses in the hard drive disposal process at the Puget Sound HCS related to a lack of timely sanitization and disposal and inadequate recordkeeping. Although Puget Sound HCS officials are holding drives because of the litigation hold related to the class action lawsuit, they told us that approximately 100 of the hard drives we observed had been in storage for approximately 3 or 4 years, and therefore are not related to the litigation hold. In addition, the hard drive disposal records at the Puget Sound HCS did not contain sufficient information for identifying hard drives with their original host computers. After our site visit, Puget Sound HCS staff provided us with revised hard drive records that include property identification numbers and hard drive serial numbers and identify hard drives with their original host computers. The Puget Sound HCS stored hard drives in a location that was in full compliance with physical security requirements in VA Handbook 0730/1. VA Headquarters Organizations. Weaknesses we identified in controls involved the lack of recordkeeping prior to February 2008 and the lack of access controls at hard drive storage facilities. We found that the current hard drive disposal records at VA headquarters contain sufficient information for identifying hard drives with their original host computers.
Specifically, OIT records hard drive information in a log that requires, among other elements, the bar code and serial numbers of the original host computer from which OIT personnel removed the hard drive and the serial number of the hard drive. OIT also records the dates on which hard drives are removed from original host computers. However, according to OIT officials and our review of the hard drive records, VA headquarters did not maintain a central record of hard drives prior to February 2008. Further, one of the two hard drive storage locations that we observed at VA headquarters had inadequate physical security because of the absence of an access control system and intrusion detection alarm system, as required by VA Handbook 0730/1. VA Handbook 0730/1, Security and Law Enforcement, prescribes physical security requirements for storage of new and used IT equipment. Specifically, the handbook requires warehouse-type storerooms to have walls to ceiling height with either masonry or gypsum wall board reaching the underside of the slab (floor) above. OIT storerooms are required to have overhead barricades that prevent “up and over” access from adjacent rooms. Warehouse, OIT, and medical equipment storerooms are all required to have motion intrusion detection alarm systems that detect entry and broadcast an alarm of sufficient volume to cause an illegal entrant to abandon a burglary attempt. Finally, OIT storerooms also are required to have special key control, meaning room door lock keys and day lock combinations that are not master keyed for use by others. Our investigator’s inspection of physical security at officially designated IT warehouses and storerooms that held new and used IT equipment at the four case study locations found that most of these storage facilities met the requirements in VA Handbook 0730/1. However, we identified some deficiencies. For example, our investigator found at least one room at all four case study locations that did not have an electronic access control system or an intrusion detection system. Designated IT equipment storage locations at the Seattle Division of the Puget Sound HCS met the physical security requirements in VA Handbook 0730/1. However, IT workrooms and other informal, undesignated storage facilities did not. Despite the established physical security requirements, we found numerous informal, undesignated IT equipment storage locations that did not meet VA physical security requirements. For example, we observed an excess property storage room at the North Texas HCS that contained boxes of 86 hard drives that needed to be disposed of or sanitized. This room lacked a motion detection alarm system and the type of locking system prescribed in VA policy. North Texas HCS staff believed this room was not subject to the security provisions of VA Handbook 0730/1 because it was not formally designated as a storeroom or warehouse. Our investigator also identified an IT equipment work room at the North Texas HCS that lacked adequate physical security measures and was considered temporary in nature. In addition, at the Boston HCS, our investigator found that security personnel were unaware of several temporary storage rooms that contained IT equipment. Some of these rooms were initially established by OIT personnel as temporary storage areas, but had been in use for several years. Because these storerooms had not been formally designated as IT storage facilities, they were not subjected to required physical security inspections. 
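Expressed as a simple checklist comparison, a storage location's compliance with these requirements could be evaluated as in the sketch below. The control names paraphrase the Handbook 0730/1 requirements described above, and the sample room is hypothetical.

```python
from typing import Set

# Illustrative sketch only: the control names paraphrase the VA Handbook
# 0730/1 requirements described above; the data structure and the sample
# room are assumptions, not VA's inspection format.
REQUIRED_CONTROLS = {
    "full_height_walls",          # walls reach the underside of the slab above
    "overhead_barricade",         # prevents "up and over" access (OIT storerooms)
    "intrusion_detection_alarm",  # motion detection that broadcasts an alarm
    "special_key_control",        # keys and combinations not master keyed for others
}

def compliance_gaps(room_controls: Set[str]) -> Set[str]:
    """Return the required physical security controls a location lacks."""
    return REQUIRED_CONTROLS - room_controls

# An informal OIT work room used for temporary storage might report:
print(sorted(compliance_gaps({"special_key_control"})))
# ['full_height_walls', 'intrusion_detection_alarm', 'overhead_barricade']
```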
Weaknesses in physical security heighten the risk that sensitive information contained on IT equipment stored in unsecured warehouses and storerooms could be compromised. Our audits and VA’s departmentwide physical inventory of IT equipment identified pervasive control weaknesses that resulted in tens of thousands of missing IT equipment items that were purchased with taxpayer dollars. About 9,800 of these items have data storage capabilities and therefore pose a risk of improper disclosure of veterans’ personal and medical information. Further, VA’s lack of user-level accountability and its failure to maintain accurate and complete IT inventory records have hindered efforts to locate missing items. In the past year, VA has made significant progress in implementing its realigned OIT organization and strengthening policies for control over IT equipment. However, ensuring that IT inventory records are complete and that they are updated as changes in status occur will be key to maintaining accuracy and accountability over IT equipment items. VA’s continued efforts to establish and maintain control over IT assets will be essential if VA is to adequately safeguard those assets from theft, loss, and misappropriation and protect sensitive personal and medical information of the nation’s veterans. We recommend that the Secretary of Veterans Affairs require the CIO, with the support of medical centers and VA headquarters organizations we tested and other VA organizations, as appropriate, to take the following five actions to improve accountability of IT equipment inventory and reduce the risk of disclosure or compromise of sensitive personal and medical information:
1. Review property inventory records and confirm that all IT equipment, regardless of the organizational equipment inventory listing, is identified in the property system.
2. Establish and implement a policy requiring development of standardized naming classifications for IT equipment—including item name, manufacturer, and model—for recording IT equipment into local property inventory systems.
3. Develop a list of medical equipment with data storage capability that should be considered as IT equipment for inventory control purposes.
4. Develop a procedure for identifying hard drive serial numbers with both the property identification numbers and serial numbers of host computers.
5. Revise the definition of IT storage locations in VA’s Handbook 0730/1, Security and Law Enforcement, to include informal IT storage locations, such as OIT work rooms, and require these locations to be included in physical security inspections.
In its July 28, 2008, written comments on our report, which are reprinted in appendix III, VA generally agreed with four of our five recommendations. VA initially disagreed with our recommendation concerning inventory control over medical equipment because it interpreted our recommendation as requiring it to redefine (i.e., reclassify) medical equipment with data storage capability as IT equipment. Instead, our recommendation was directed at developing a list of medical equipment with data storage capability and including this equipment in physical inventories of IT equipment to provide for CIO oversight of these items. We followed up with VA officials and made appropriate changes to our report to clarify the intent of our recommendation.
In addition, while agreeing with the intent of our recommendation concerning the development of standard naming classifications for its IT equipment, VA initially commented that it differed with part of our recommendation concerning who should be responsible for the development of standardized naming classifications. However, VA’s comments indicate that it interpreted this recommendation as requiring classification action to occur on a decentralized basis at each VA facility. This was not our intent. In follow-up discussions with VA officials, we explained that our recommendation was directed at taking action to establish VA-wide naming conventions that would be used by all VA facilities in recording property information in their local inventory systems. We clarified the wording in our recommendation accordingly. Based on our follow-up meeting, VA officials said they agreed with all five of our recommendations. They reiterated actions noted in VA’s comment letter on steps taken as well as planned actions to improve the accuracy and consistency of information in VA’s property inventory systems. We are sending copies of this report to interested congressional committees; the Secretary of Veterans Affairs; the Veterans Affairs Chief Information Officer; the Under Secretary of Health, Veterans Health Administration; and the Director of the Office of Management and Budget. We will make copies available to others upon request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. Please contact me at (202) 512-9095 or dalykl@gao.gov if you or your staff have any questions concerning this report. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Major contributors to this report are acknowledged in appendix V. Given the continuing nature of information technology (IT) equipment inventory control problems and their significance, the Chairman and Ranking Member of the House Committee on Veterans’ Affairs, Subcommittee on Oversight and Investigations, asked us to perform additional follow-up work to determine (1) whether the Department of Veterans Affairs (VA) has made progress in implementing our prior recommendations for improving internal control over IT equipment and (2) the effectiveness of VA’s current internal controls to prevent theft, loss, or misappropriation of IT equipment. We evaluated VA’s progress in implementing our previously reported recommendations by reviewing agency documentation and interviewing property management and Office of Information and Technology (OIT) officials on actions taken in response to recommendations in our 2007 and 2004 reports. In concert with the Subcommittee request that VA perform a departmentwide physical inventory of IT assets, we reviewed the results of VA’s 2007 physical inventory of IT equipment items and VA’s process for completing Reports of Survey on lost and stolen items. We also evaluated policies that include guidance for improving accountability of IT equipment and accuracy of inventory records, related memorandums, and other documentation, such as action summaries. In addition, we interviewed cognizant VA officials about specific actions under way or completed, the component organizations responsible for those actions, and the status and targeted completion dates of those actions.
Our assessment of the effectiveness of current VA IT equipment inventory controls included statistical tests of key control attributes at four case study locations, including the health care systems (HCS) in North Texas, Boston, and Puget Sound, and VA headquarters organizations. We also assessed controls over hard drives in the excess property disposal process, and our investigators made physical security inspections of IT storage locations at our four case study locations. We used as our criteria applicable law and VA policy, as well as our Standards for Internal Control in the Federal Government and our Internal Control Management and Evaluation Tool. We reviewed applicable program guidance provided by the test locations and interviewed officials about their IT inventory processes and controls. In selecting our case study locations, we chose three geographically disparate VA HCS. We also tested inventory at VA headquarters organizations as a means of assessing the overall control environment, or “tone at the top,” as we did in our 2007 audit. Table 9 shows the VA locations selected for IT equipment inventory control testing, the sample size, and the reported number and value of IT equipment items at each location. We performed appropriate data reliability procedures, including an assessment of each VA test location’s procedures for assuring data reliability, reasonableness checks on electronic data, and tests to assure that IT equipment inventory was sufficiently complete for the purposes of our work. As in our 2007 work, we relied on biomedical engineers to provide lists of medical equipment with the ability to store or process electronic data. We performed analytical procedures to confirm reasonableness of the medical equipment listings provided by the three HCS. Our analysis determined that the original listing submitted by the North Texas HCS staff was incomplete regarding medical equipment meeting our definition as IT equipment. We revisited our criteria for identifying medical equipment with data storage and processing capability with North Texas HCS officials and asked them to provide us a new medical equipment listing to support our sampling and control tests. Our procedures and test work also identified a limitation related to the completeness of IT equipment inventory at our four test locations. The VA North Texas and Boston HCS maintained some IT equipment records outside of their central listings of IT equipment. We also identified evidence that the VA Puget Sound and VA headquarters did not record all IT equipment items in the official property records. Further, our statistical tests determined that some IT equipment was recorded in inventory categories other than IT. We disclosed this limitation in the discussion of our test results. As a result of these limitations, the population of IT equipment is not known for VA overall or by location and we were not able to project our test results to the population of IT equipment inventory at each of our four test locations. However, we determined that these data were sufficiently reliable for us to project our test results to the population of current, recorded IT equipment inventory at each of the four locations. From the population of current, recorded IT equipment inventory at the time of our tests, we selected stratified random probability samples of IT equipment, including medical equipment with data storage capability, at each of the three HCS locations. 
For the 19 VA headquarters organizations, we stratified our sample by 6 major offices and used a seventh stratum for the remaining 13 organizations. With these statistically valid samples, each item in the population for the four case study locations had a nonzero probability of being included, and that probability could be computed for any item. Each sample item for a test location was subsequently weighted in our analysis to account statistically for all items in the population for that location, including those that were not selected. We performed tests on statistical samples of IT equipment inventory transactions at each of the four case study locations to assess whether the system of internal control over physical IT equipment inventory was effective (i.e., provided reasonable assurance of the reliability of inventory information and accountability of the individual items). For each IT equipment item in our statistical sample, we assessed whether (1) the item existed (meaning that the item recorded in the inventory records could be located), (2) inventory records and processes provided adequate accountability, and (3) identifying information (property number, serial number, model number, and location) was accurate. We explain the results of our existence tests in terms of control failures related to missing items and recordkeeping errors. The results of our statistical samples are specific to each of the four test locations and cannot be projected to the population of VA IT inventory as a whole. We present the results of our statistical samples for each population as point estimates representing (1) our projection of the estimated error overall for each control attribute and (2) the two-sided, 95 percent confidence intervals for the failure rates. To assess VA’s controls over computer hard drives in the property disposal process, at each HCS and VA headquarters we interviewed OIT officials, observed hard drive storage locations, and obtained copies of VA documentation related to hard drives in the disposal process at the time of our site visits. Our investigators supported our tests of IT physical inventory controls by assessing the physical security of various IT equipment storage facilities at each of our four case study locations. As part of our assessment, one of our investigators interviewed VA Police at the three HCS locations and federal agency law enforcement officers at VA headquarters and met with physical security specialists at each of the test locations to discuss the results of our physical security inspections and the status of VA actions on identified weaknesses. We briefed VA managers at our three HCS test locations and VA headquarters, including VA HCS directors and OIT and property management officials, on the details of our audit, our findings, and their implications. On July 15, 2008, we requested comments on a draft of this report. We received comments from the Secretary of Veterans Affairs on July 28, 2008, and we had follow-up discussions with cognizant VA officials. We have summarized VA’s comments and our follow-up discussions in the Agency Comments and Our Evaluation section of this report. We conducted this performance audit from January 2008 through July 2008 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. 
We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. We performed our investigative work in accordance with standards prescribed by the President’s Council on Integrity and Efficiency. Table 10 lists the 12 recommendations from our 2007 report, summarizes VA’s actions, and presents the status of each recommendation. VA property officials from the Office of Acquisition and Logistics (OAL) and officials in the Office of Information and Technology (OIT) worked together to create a new VA Handbook 7002, Logistics Management Procedures, which updates VA policy for property management, including specific policy pertaining to information technology (IT) equipment. The Assistant Secretary for Management mandated early implementation of VA Handbook 7002 on July 3, 2008. Table 11 lists the 6 property-related recommendations from our 2004 report, summarizes VA’s actions, and presents the status of each recommendation. Table 12 summarizes Report of Survey information related to VA’s 2007 physical inventories of IT equipment for the 12 case study locations covered in our 2004, 2007, and 2008 audits. We used the original acquisition value as the best available data for the cost of IT equipment items that could not be located during VA’s 2007 physical inventory. In addition to the contact named above, Gayle L. Fischer, Assistant Director; Andrew O’Connell, Assistant Director and Supervisory Special Agent; F. Abe Dymond, Assistant General Counsel; Doreen S. Eng, Assistant Director; Bamidele A. Adesina; James D. Ashley; Deyanna J. Beeler; Francine M. DelVecchio; Lauren S. Fassler; Steven M. Koons; Kelly A. Richburg; Ramon J. Rodriguez, Special Agent; Daniel E. Silva; Chevalier C. Strong; Danietta S. Williams; and Matthew L. Wood made key contributions to this report.
In July 2004, GAO reported that the six Department of Veterans Affairs (VA) medical centers it audited lacked a reliable property control database and effective inventory policies and procedures. In July 2007, GAO reported that continuing internal control weaknesses over IT equipment at four case study locations at VA resulted in an increased risk of theft, loss, and misappropriation of IT equipment assets. GAO's two reports included 18 recommendations to improve internal control over IT equipment. GAO was asked to perform a follow-up audit to determine (1) whether VA has made progress in implementing GAO's prior recommendations for improving internal control over IT equipment and (2) the effectiveness of VA's current internal controls to prevent theft, loss, or misappropriation of IT equipment. GAO reviewed policies and other pertinent documentation, statistically tested IT equipment inventory controls at four geographically disparate locations, and interviewed VA officials. VA has made significant progress in addressing prior GAO recommendations to improve controls over IT equipment. Of the 18 recommendations GAO made in its two earlier reports, VA completed action on 14 recommendations, partially implemented action on 2 recommendations, and is working to address the 2 remaining open recommendations. These recommendations focused on strengthening policies and procedures to establish a framework for accountability and control of IT equipment. If effectively implemented, VA's July 2008 policy changes would address many of the control weaknesses GAO identified. Mandated early implementation of this new policy addresses user-level accountability and requirements for strengthening physical security. In addition, to determine the extent of inventory control weaknesses over its IT equipment, VA performed a departmentwide physical inventory in 2007. However, as of May 15, 2008, VA reported that it could not locate about 62,800 IT equipment items, of which 9,800 could have stored sensitive information. Because VA does not know what, if any, sensitive information resided on the equipment, potentially affected individuals could not be notified. GAO's statistical tests of IT equipment inventory controls from February through May 2008 at four locations identified continuing control weaknesses, including missing items, lack of accountability, and errors in IT equipment inventory records. Although these control weaknesses may be addressed through early implementation of the July 2008 policies, the fact that GAO identified missing items only a few months after these locations had completed their physical inventories is an indication that underlying weaknesses in accountability over IT equipment have not yet been corrected. GAO's tests identified 50 missing items, of which 34 could have stored sensitive data, but again, notifications to individuals could not be made. Further, the lack of user-level accountability and inaccurate records on status, location, and item description of IT equipment items at the four case study locations make it difficult to determine the extent to which actual theft, loss, or misappropriation of IT equipment may have occurred. In addition, the four locations had weaknesses in controls over hard drives in the property disposal process as well as physical security weaknesses at IT storage facilities. 
These control weaknesses present a risk that VA could lose control over new, used, and excess IT equipment and that any sensitive personal and medical information residing on hard drives in this equipment could be compromised.
Ports are critical gateways for the movement of commerce through the global supply chain. According to CBP data, in fiscal year 2012, about 11.5 million cargo container shipments arrived from more than 650 foreign ports—meaning roughly 31,000 maritime container shipments arrived each day that year. The facilities, vessels, and infrastructure within ports, and the cargo passing through them, all have vulnerabilities that terrorists could exploit. Every time responsibility for cargo in containers changes hands along the supply chain there is the potential for a security breach. While there have been no known incidents of containers being used to transport WMDs, criminals have exploited containers for other illegal purposes, such as smuggling weapons, people, and illicit substances. Figure 1 illustrates the notional key points of transfer involved in the global supply chain—from the time that a container is loaded with goods at a foreign factory to its arrival at the U.S. seaport and ultimately the U.S. importer. DHS has taken steps to secure the global supply chain, including the cargo in oceangoing containers destined for the United States. DHS’s strategy includes focusing security efforts beyond U.S. borders to target and examine high-risk cargo and vessels before they enter U.S. seaports. DHS’s strategy is based on a layered approach of related programs that attempt to focus resources on potentially risky foreign ports, vessels, and cargo container shipments while allowing other cargo container shipments to proceed without unduly disrupting the flow of commerce into the United States. DHS’s maritime security programs support the National Strategy for Global Supply Chain Security, which emphasizes risk management and coordinated engagement with key stakeholders who also have supply chain roles and responsibilities. Figure 2 shows DHS’s key maritime security programs and the various segments in the global supply chain where these programs are focused. CSI is a program that aims to identify and examine U.S.-bound cargo container shipments that could pose a high risk of concealing WMDs or other terrorist contraband by reviewing advanced cargo information about the shipments. As part of the CSI program, CBP officers are stationed at select foreign seaports to identify high-risk U.S.-bound container cargo shipments before they are loaded onto U.S.-bound vessels. As of July 2013, there were 58 CSI ports in 32 countries that, collectively, account for over 80 percent of the container shipments imported into the United States. In addition to the CSI ports where CBP placed targeters, CBP also entered into arrangements with Australia and New Zealand to remotely target U.S.-bound cargo container shipments from the United States. A complete listing of the countries that participate in the CSI program can be found in appendix I. CBP officers stationed at foreign CSI ports are to conduct the following activities: Target U.S.-bound container shipments. As we previously reported, CBP targeters use ATS and other information to electronically review information about U.S.-bound shipments departing from the foreign port—a process CBP refers to as screening. CBP targeters review the ATS risk scores and additional information to identify high-risk shipments with a potential nexus to terrorism—a process referred to as targeting. The CBP targeters make a final determination about which containers are high risk and will be referred to host government officials for examination. 
Request examinations of high-risk container shipments. According to our work and updates provided by CBP officials, CBP targeters work with host country government officials to mitigate high-risk container shipments. Actions may include resolving discrepancies in shipment information, scanning cargo containers’ contents with radiation detection or imaging equipment (as shown in fig. 3), or conducting physical inspections of the containers’ contents.

According to our prior work and updates provided by CBP officials, C-TPAT aims to secure the flow of goods bound for the United States by developing a voluntary public-private sector partnership with stakeholders of the international trade community. C-TPAT partners agree to adhere to the program’s eight established minimum security criteria in areas such as physical security, personnel security, and information technology. C-TPAT partners also agree to provide CBP with information regarding their security processes and procedures and allow CBP to validate or verify that these security measures are in place. In return, C-TPAT partners receive various incentives, such as reduced examinations based upon lower risk scores.

In addition to the CBP programs, the Coast Guard also has an internationally focused maritime security program, the IPS program. Under the IPS program, Coast Guard officials visit foreign ports to evaluate their antiterrorism security measures against established International Ship and Port Facility Security (ISPS) Code standards. In addition, the Coast Guard collects and shares best practices with foreign countries and engages in efforts to help facilitate a comprehensive and consistent approach to maritime security in ports worldwide. Coast Guard officials reported that from the program’s inception in April 2004 through June 2013, IPS program officials have visited port facilities in 151 countries and overseas protectorates engaged in maritime trade with the United States. On the basis of these visits and the information provided by the foreign countries as part of them, the Coast Guard determines whether the countries have effectively implemented the ISPS Code and are maintaining effective security measures in their ports. If the Coast Guard finds that a country is not maintaining port security measures, the Coast Guard can impose conditions of entry on vessels arriving in the United States from that country.

The Coast Guard uses the results of the port risk assessments to help decide which foreign vessels to board or inspect through its Port State Control program, according to the U.S. Coast Guard International Port Security Program: Annual Report 2012. While the Port State Control program does not directly affect container security, as part of this program, the Coast Guard uses risk-based criteria to identify which foreign vessels entering U.S. ports and waterways it considers to be at risk of noncompliance with international or domestic regulations, and performs compliance examinations of these vessels. The risk-based criteria include the vessel’s management, the flag state that the vessel is registered under, the vessel’s recognized security organization, and the vessel’s security compliance history resulting from previous examinations.

Through mutual recognition arrangements with foreign partners, the security-related practices and programs of the Customs or maritime security administration of one partner are recognized and accepted by the administration of another.
Both CBP and the Coast Guard have entered into such arrangements. For example, CBP can expand the reach of its supply chain security programs through MRAs. According to the World Customs Organization, mutual recognition allows Customs administrations to target high-risk shipments more effectively and expedite low-risk shipments by, for example, reducing redundant examinations. The World Customs Organization distinguishes between mutual recognition of Customs controls and mutual recognition of authorized economic operator (AEO) programs:

Mutual recognition of Customs controls (Customs-to-Customs MRAs): This is achieved when, for example, the Customs administrations of two countries have confidence in and accept each other’s procedures for targeting and inspecting cargo shipped in containers.

Mutual recognition of AEO programs (AEO MRAs): This occurs when Customs administrations agree to recognize one another’s AEO programs and security features and to provide comparable benefits to members of the respective programs. In the United States, C-TPAT is the designated AEO program and businesses participating in the program are AEOs.

According to C-TPAT documentation, CBP has developed an AEO MRA process involving four phases: (1) a comparison of the program requirements to determine if the programs align on basic principles; (2) a pilot program of joint validation visits to determine if the programs align in basic practice; (3) the signing of an MRA; and (4) the development of mutual recognition operational procedures, primarily those associated with information sharing. MRAs are based on close working relationships between Customs administrations, which allow for the exchange of information, intelligence, and documents in an effort to assist countries in the prevention and investigation of Customs offenses. The Coast Guard can also enter into MRAs that recognize international maritime security practices of other foreign governments. For example, the Coast Guard has a process in place to recognize the port inspection procedures of other countries.

Although DHS’s maritime security programs support the National Strategy for Global Supply Chain Security and the strategy’s risk-informed security approach, the SAFE Port Act included requirements that pilot projects be established to test the feasibility of scanning 100 percent of U.S.-bound cargo containers at foreign ports. Subsequently, the Implementing Recommendations of the 9/11 Commission Act of 2007 (9/11 Act) required, among other things, that by July 2012, 100 percent of U.S.-bound cargo containers be scanned at foreign ports with both radiation detection and nonintrusive inspection (imaging) equipment before being placed onto U.S.-bound vessels. In June 2008 and in October 2009, we found that CBP faced numerous challenges in implementing the 100 percent scanning requirement at the pilot ports. In October 2009, we recommended, among other things, that CBP conduct feasibility and cost-benefit analyses of implementing the 100 percent scanning requirement and provide the results to Congress along with any suggestions of cost-effective alternatives to implementing the 100 percent scanning requirement, as appropriate. CBP partially concurred with the recommendations but did not implement them. According to CBP officials, CBP does not plan to conduct these analyses related to achieving the 100 percent scanning requirement because the pilot project has been reduced in scope and currently there are no funds to conduct such analyses.
In February 2012, we reported that the scanning challenges continued, and CBP achieved 100 percent scanning of U.S.-bound cargo containers at only one foreign pilot port where it was being attempted—Port Qasim, Pakistan. In May 2012, the Secretary of Homeland Security announced a 2-year extension of the deadline—until July 2014—for implementing the requirement that cargo containers not enter the United States unless they are scanned at foreign ports prior to being loaded on vessels. In its report to Congress that same month, DHS stated that it recognizes the need to proceed with its container security programs in a manner that maximizes the security of maritime cargo and facilitates its movement. DHS added that it plans to continue working with other federal agencies and international partners to develop technology and enhance risk management processes, in addition to continuing its existing container security programs. According to the January 2013 National Strategy for Global Supply Chain Security Implementation Update, DHS is working to identify potential alternatives to 100 percent scanning, and a senior DHS official told us that DHS’s layered security strategy will be a key component of the alternative.

The Coast Guard and CBP, DHS components with maritime security responsibilities, have developed models to assess the risks of foreign ports and the cargo carried by vessels from these ports. The Coast Guard uses the model it developed to inform operational decisions for its IPS program and updates its assessment annually. In contrast, in 2009, CBP developed a risk model to begin the process of expanding its efforts to scan 100 percent of U.S.-bound container shipments, but the model was never implemented. As a result, CBP does not know whether the ports included in CSI remain valid.

The Coast Guard has developed a risk-informed model as part of its IPS program to regularly assess the potential threat foreign ports pose to the maritime supply chain and make operational decisions regarding foreign ports’ security measures. According to the 2012 IPS program annual report, this risk model includes four components, summarized below, that help the Coast Guard focus IPS program resources.

Country threat. The Coast Guard uses security and commerce data as well as measures on government decision making, such as the prevalence of corruption, to assess the likelihood of terrorists using a foreign port to import WMDs or other contraband into the United States. In particular, the Coast Guard relies on CBP trade information, the U.S. Department of State’s Security Environment Threat List, World Bank reports, and other data to determine whether countries represent a normal, medium, or high security risk.

Foreign port assessment. MTSA, as amended by the SAFE Port Act, requires the Coast Guard to reassess countries’ ports every 3 years, and during these visits, IPS officials use two data checklists, one that assesses government performance and one that assesses facilities’ performance. The government performance checklist measures how well a government gathers and assesses information on security threats, and reviews and approves port facility security plans, among other things. The facilities performance checklist assesses the security measures implemented to prevent unauthorized cargo and people from entering the port. Such security measures include, for example, perimeter security and access procedures for port facility employees and visitors.

Country responsiveness. The IPS model includes measures of the political, economic, and social conditions in a country to help determine whether countries are likely to efficiently utilize Coast Guard assistance. The model incorporates information on corruption, inflation, and “people measures,” such as infant mortality rates and literacy rates.

Country wealth. The IPS model includes a measure of national income to determine if the country can afford to maintain security measures on its own or whether it is likely to require foreign assistance.

According to the 2012 IPS program annual report, the Coast Guard combines these components into a single risk model and uses the results to make informed decisions on how to engage each country with the IPS program, including (1) how often to visit ports, (2) how many staff to assign to a particular visit, and (3) whether the country requires assistance. Specifically, the Coast Guard visits foreign ports in higher-risk countries more frequently (and with more IPS officials) than ports in lower-risk countries, which we discuss later in this report. In addition, the IPS annual report states that the Coast Guard uses the country threat component of the IPS risk model to help determine which foreign vessels to board as part of its Port State Control program. The Coast Guard updates its risk model annually.

While elements of the Coast Guard’s risk model could be used to inform maritime container security efforts, there are limits regarding how it can be applied to maritime supply chain security because the IPS program is focused on assessing port security. Unlike the CBP risk model described below, the Coast Guard’s model is not designed to assess the risk of maritime cargo shipments imported from foreign ports (e.g., transshipped cargo).

In 2002, CBP selected the initial 23 CSI ports largely on the basis of the volume of U.S.-bound container cargo, but increased the number of risk factors in selecting additional ports as it expanded the CSI program beginning in 2003. Specifically, according to CBP documentation, volume was a key criterion for assessing which foreign ports represented the greatest threat to the United States. Figure 4 shows the large number of containers shipped through the Port of Singapore, one of the original CSI ports. After selecting these initial 23 ports, CBP subsequently added 35 ports to the CSI program from 2003 through 2007 on the basis of additional criteria, such as strategic threat factors and diplomatic or political considerations. Through these expansion efforts, in 2007 CBP reached its goal of staffing 58 CSI ports that, collectively, cover over 80 percent of U.S.-bound container shipments. We reported in 2008 that CBP did not have plans to add other ports to the CSI program because, according to CBP, the costs associated with expanding the program would outweigh the potential benefits. In 2009, CBP developed a risk model in conjunction with DOE to begin the process of expanding its efforts to scan 100 percent of U.S.-bound container shipments for a related program, but the model was never implemented. In particular, in April 2009, the Secretary of Homeland Security approved the “strategic trade corridor strategy” as an approach to expanding CBP’s efforts to scan U.S.-bound container cargo beyond the original pilot locations.
As part of this expansion effort, CBP developed a model—assisted by DOE—to rank potential foreign ports on the basis of risks associated with countries and maritime commerce, as well as the number and percentage of high-risk, U.S.-bound shipments processed. Specifically, DOE provided the country threat and shipping lane information from the model it used to identify and prioritize foreign ports for participation in the Megaports Initiative, and CBP provided the high-risk shipment data from ATS. CBP and DOE completed their initial analyses in February 2009, which identified 356 potential expansion ports ranked by risk, and CBP narrowed the list down to 187 ports by considering only ports that had at least 1,000 shipments per year to the United States. CBP collaborated with DOE, the Department of State, and the intelligence community to prioritize 22 ports for expansion of 100 percent scanning efforts on the basis of such factors as the model’s risk ranking and the volume of U.S.-bound cargo container shipments. CBP ultimately did not pursue this strategy, given cargo security program budget cuts and the Secretary of Homeland Security’s decision to extend the deadline for 100 percent scanning until July 2014. The results of the 2009 strategic trade corridor prioritization model show that the CSI program is operating at some of the riskiest foreign ports, but it also operates at ports that are less risky. Since the model focused on U.S.-bound maritime containerized cargo, its results could be used as a proxy measure to assess whether CSI ports coincide with those foreign locations that pose the greatest risk to the global supply chain. We combined the risk rankings for the 356 ports in the 2009 model with fiscal year 2012 U.S.-bound shipment data and excluded ports with fewer than 1,000 U.S.-bound shipments per year, which narrowed the list to 138 ports. Comparing the CSI ports with the results shows that CSI did not have a presence at about half of the ports CBP considered higher risk, and about one-fifth of the existing CSI ports were at lower-risk locations. Specifically, of the 61 current CSI ports, 57 had at least 1,000 U.S.-bound shipments in fiscal year 2012. Of these 57 CSI ports, 27 were within the top 50 riskiest ports, 18 ports were between the 51st and 100th riskiest ports, and 12 ports were not among the top 100 riskiest ports. Of the remaining 4 CSI ports, 3 had fewer than 1,000 U.S.-bound shipments and 1 port was not ranked in the 2009 risk model. According to CBP officials, CBP has not established CSI locations in 15 of the top 50 riskiest ports either because host governments have not been cooperative regarding CBP cargo examination requests or CBP was not able to negotiate an arrangement with host governments to establish CSI operations, as discussed below. CBP officials stated that factors have changed since the model was developed in 2009, and they do not consider all of the same ports to be high risk at this time. For example, one potential expansion port the model classified as higher risk in 2009 now ships fewer containers to the United States, and CBP officials reported that they would not currently consider including this port in the CSI program. Further, according to CBP’s fiscal year 2012 budget submission, CBP considered closing several CSI ports while maintaining CSI operations in strategically important ports. 
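The comparison just described is, at bottom, a join-and-filter exercise over two data sets: the 2009 risk ranking and fiscal year 2012 shipment volumes. The following Python sketch illustrates that logic; the port names, ranks, volumes, and CSI flags are hypothetical stand-ins, not CBP or GAO data. Applied to the actual ranking and volume data, the same tiering would yield the groupings reported above.

# Illustrative sketch of the join-and-filter comparison described above.
# All ports, ranks, volumes, and CSI flags are hypothetical stand-ins.
from collections import Counter

# (port, 2009 risk rank, FY2012 U.S.-bound shipments, CSI port?)
ports = [
    ("Port A",  12, 250_000, True),
    ("Port B",  37,  48_000, False),
    ("Port C",  88,  15_000, True),
    ("Port D", 140,   9_000, True),
    ("Port E",  45,     600, False),  # excluded: under 1,000 shipments
]

MIN_SHIPMENTS = 1_000  # threshold CBP applied to the 2009 model results

eligible = [p for p in ports if p[2] >= MIN_SHIPMENTS]

def tier(rank):
    """Bucket a port by risk rank, mirroring the report's groupings."""
    if rank <= 50:
        return "top 50 riskiest"
    if rank <= 100:
        return "51st-100th riskiest"
    return "below top 100"

csi_ports_by_tier = Counter(tier(r) for _, r, _, is_csi in eligible if is_csi)
uncovered_high_risk = [name for name, r, _, is_csi in eligible
                       if r <= 50 and not is_csi]

print(csi_ports_by_tier)
print("Higher-risk ports without a CSI presence:", uncovered_high_risk)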
Given this information, and the fact that the number and location of CSI ports have generally not changed since 2009, the CSI program’s current locations may not be in alignment with the highest-risk ports. Because the CSI program depends on the willingness of sovereign host countries to participate, implementing CSI poses challenges, and CBP’s efforts to negotiate with other countries to expand the program have not always been successful. CBP and the Department of State point to challenges in implementing CSI in high-risk countries, such as concerns about CBP officer safety, funding constraints, and securing host country governments’ willingness to facilitate requested cargo examinations of U.S.-bound shipments. CBP officials stated that CBP is not pursuing the strategic trade corridor strategy, but they noted that since the beginning of the CSI program, CBP has made efforts to negotiate to establish CSI ports within four countries that have ports representing potentially significant risks. These efforts were not successful in three countries for political reasons. For example, the legislature in one of these countries did not approve the placement of CSI in its country. However, according to CBP officials, CBP has signed a declaration of principles to place CSI in an additional foreign country and estimates that CSI will be operational within this country by the end of fiscal year 2014. CBP has not assessed the risk of foreign ports that ship cargo to the United States for its CSI program since completing the CSI expansion analysis in 2005. CBP officials stated they have not performed any such risk assessments since 2005 because CBP does not have any specific expansion plans for the CSI program. However, our work indicates that CBP may expand CSI. In particular, CBP’s fiscal year 2013 and 2014 budget requests noted that CBP may expand CSI in the future to additional countries of strategic interest, if feasible; and CBP officials told us that CBP is finalizing negotiations with a foreign government to expand CSI to an additional port, as discussed above. We acknowledge that CBP may face challenges in adding to its CSI program the foreign ports that ship the riskiest cargo to the United States, but expanding CSI without assessing the security risk posed by foreign ports is contrary to agency policy. In particular, according to the CSI Statement of Policy and Intent signed by the CBP Commissioner in April 2011, CBP is to prioritize CSI expansion locations in accordance with the National Strategy for Global Supply Chain Security, which states that the federal government should take a risk-informed approach to secure the global supply chain. Further, the SAFE Port Act provides that DHS/CBP is to assess the costs, benefits, and other factors associated with designation of a CSI port, including the level of risk for the potential compromise of containers by terrorists, or other threats as determined by DHS; the volume of cargo being imported to the United States directly from, or being transshipped through, the foreign seaport; and the results of the Coast Guard’s IPS assessments. In addition to not completing a risk assessment to help inform potential CSI expansion, CBP has also not assessed the risk of its current CSI ports—some of which have participated in CSI for more than a decade—to determine if they remain valid on the basis of risk.
CBP officials stated that they have not conducted such an assessment because two factors make it difficult to close CSI ports and reallocate resources to prospective new CSI ports. In particular, the officials stated that (1) removing CSI from a country might negatively affect political relations with the host government, and (2) uncertain CSI funding in future years could make it difficult for CBP to make plans to close lower-risk CSI ports and open new CSI ports at higher-risk locations. Specifically, CBP officials estimate that it could take about 1 year to close a CSI port and 2 years or more to open a new port, and, given budget uncertainties, CBP has not pursued such efforts. It is unclear if the political and cost challenges CBP officials identified would affect any reallocation of CSI resources to prospective new CSI ports, but these challenges do not preclude CBP from assessing the risk of its current CSI locations. Regarding the impact of changes to the CSI program on political relations, CBP officials stated they routinely speak to host government officials during CSI evaluations about how to strengthen the program, but these officials said that the discussions have not specifically included the impact that removing lower-risk ports from the CSI program would have on relations with the host government. Further, it is unclear if reallocating resources from current CSI ports to higher-risk ports would ultimately increase costs because some costs—such as staffing costs and office space leases—could be lower in some of the new locations than costs in the lower-risk ports it would be leaving. Moreover, the DHS National Infrastructure Protection Plan and our Risk Management Framework state that risk assessments, the effectiveness of measures to deal with risks, and the costs of those measures are to inform decisions. Our framework also states that agencies should periodically evaluate the cost-effectiveness of their programs and that mechanisms for altering a program should be in place based on current risk data. In addition, the DHS National Infrastructure Protection Plan states that effective protective programs seek to use resources efficiently by focusing on actions that offer the greatest mitigation of risk for any given expenditure. The plan also states that risk management includes a feedback loop that continually incorporates new information, such as changing threats or the effect of actions taken to reduce or eliminate identified threats, vulnerabilities, or consequences. We recognize that it may not be possible to include all the higher-risk ports in CSI because CSI requires the cooperation of sovereign foreign governments and because of concerns regarding the security of U.S. personnel who may be stationed in those countries. Nevertheless, given that CBP is no longer pursuing implementation of 100 percent scanning, it is important that CBP apply the risk management principles discussed above to CSI—a risk-informed program—to more effectively mitigate the threat of high-risk cargo before it is shipped to the United States. Periodically assessing the risk level of cargo shipped from foreign ports and using the results of these risk assessments to inform the CSI locations would help ensure that CBP is allocating its resources to provide the greatest possible coverage of high-risk cargo to best mitigate the risk of importing WMDs or other terrorist contraband into the United States through the maritime supply chain.
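To illustrate the feedback-loop principle just described, the sketch below shows one way a periodic reassessment could be structured. It is a hypothetical illustration under assumed data structures and an assumed cutoff, not an existing CBP process, and its output would still be weighed against the political and cost factors discussed above.

# Hypothetical sketch of a periodic CSI reassessment loop, in the spirit
# of the feedback loop described above; names and the cutoff are assumed.

def reassess_csi_locations(current_csi_ports, latest_risk_ranks,
                           high_risk_cutoff=50):
    """Compare current CSI locations against an updated port risk ranking.

    current_csi_ports: set of port names where CSI currently operates.
    latest_risk_ranks: dict of port name -> current risk rank (1 = riskiest).
    Returns (ports to consider adding, ports to consider for closure review).
    """
    high_risk = {port for port, rank in latest_risk_ranks.items()
                 if rank <= high_risk_cutoff}
    consider_adding = sorted(high_risk - current_csi_ports)
    consider_reviewing = sorted(
        port for port in current_csi_ports
        if latest_risk_ranks.get(port, float("inf")) > high_risk_cutoff)
    return consider_adding, consider_reviewing

# An annual run would rebuild latest_risk_ranks from current threat and
# shipment data; political and cost factors would still shape decisions.
add, review = reassess_csi_locations(
    current_csi_ports={"Port A", "Port D"},
    latest_risk_ranks={"Port A": 10, "Port B": 22, "Port D": 130})
print("Consider adding:", add)        # ['Port B']
print("Consider reviewing:", review)  # ['Port D']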
DHS, through the Coast Guard and CBP, has taken a number of steps to improve the efficiency and effectiveness of its maritime security programs to reduce global supply chain risks. In this regard, the Coast Guard’s actions have primarily been focused on the IPS program. CBP has continued its efforts to expand or refine its C-TPAT and CSI programs, but faces host country political and legal constraints. The Coast Guard has worked to use resources more effectively and reduce risks at foreign ports and from U.S.-bound vessels through its IPS program by implementing a risk-informed model that prioritizes the countries to visit and provide with assistance. When the Coast Guard first implemented the IPS program in 2004, it was required by MTSA to assess the effectiveness of antiterrorism measures maintained in ports where U.S. vessels call or from which vessels depart for the United States. As a result, the Coast Guard focused on completing initial visits of foreign ports to determine ISPS Code compliance, but did not have a methodology to prioritize follow-up visits and help countries increase their level of port security. To accomplish these goals, in 2005, the Coast Guard began developing its IPS risk model to assess the risks of foreign ports and prioritize assistance, which it fully integrated into IPS operations in 2011. The Coast Guard classifies countries as normal, medium, or high security risks and completes port security checklists during foreign port visits. According to the 2012 IPS program annual report, the Coast Guard uses the results of its risk assessments to help determine the amount of resources needed to visit foreign countries’ ports, board foreign vessels, and track port security improvements. Specifically, the Coast Guard uses the risk model results to more efficiently and effectively allocate resources to help ensure that visits to foreign ports in higher-risk countries occur more frequently (and with more IPS officials) than to ports in lower-risk countries. Table 1 provides information on Coast Guard IPS program visits, by country risk level, for fiscal year 2012. IPS program officials we met with that are responsible for assessing ports in Africa and Southeast Asia stated that this risk-informed approach helps the Coast Guard more efficiently use its resources. Further, the IPS program has enabled the Coast Guard to measure foreign countries’ port security based on improvements its officials observe when completing foreign port visits. According to the 2012 IPS program annual report, port assessment scores have improved worldwide since the Coast Guard initiated the IPS program in 2004. The Coast Guard attributes this success, in part, to implementation of the IPS risk model. According to the 2012 IPS program annual report, the Coast Guard also uses the results of the IPS model to allocate foreign assistance. The risk model includes (1) country threat information; (2) port visit results; (3) a determination of which countries are most likely to benefit from assistance to improve port security, such as port security training; and (4) the individual country’s ability to best use assistance funds and sustain security efforts, as discussed earlier in this report. The 2012 report also states that Coast Guard officials are to use this information to direct resources to those foreign countries where they believe the return on investment will be greatest. 
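As a rough illustration of how a tiered model of this kind can drive visit frequency and staffing, consider the following sketch. The component weights, cutoffs, and visit schedule are assumptions made for illustration only; the annual report does not disclose the model’s actual weighting.

# Minimal sketch of risk-tiered resource allocation in the spirit of the
# IPS model. The weights, cutoffs, and schedule are assumptions made for
# illustration; the report does not disclose the model's actual weighting.

def country_risk_tier(threat, port_score, responsiveness, wealth):
    """Combine the four IPS components (each scaled 0-1, 1 = worse)
    into a normal/medium/high risk tier."""
    composite = (0.4 * threat + 0.3 * port_score
                 + 0.2 * responsiveness + 0.1 * wealth)
    if composite >= 0.6:
        return "high"
    if composite >= 0.3:
        return "medium"
    return "normal"

# Assumed mapping of tier to visit cadence (months between visits) and
# team size, reflecting the report's point that higher-risk countries
# are visited more often and with more IPS officials.
VISIT_SCHEDULE = {"high": (12, 4), "medium": (24, 3), "normal": (36, 2)}

tier = country_risk_tier(threat=0.8, port_score=0.5,
                         responsiveness=0.6, wealth=0.7)
months_between_visits, team_size = VISIT_SCHEDULE[tier]
print(tier, months_between_visits, team_size)  # high 12 4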
Further, this report states that the Coast Guard uses the results of the IPS risk model to help determine which foreign vessels to board as part of its Port State Control program. The risk-based screening tool the Coast Guard uses to select vessels to board assigns point values to various risk factors, such as country threat data from the IPS risk model. In addition, the Coast Guard boards foreign vessels that have recently stopped in higher-risk ports (i.e., countries that have not substantially implemented the ISPS Code). In addition to prioritizing resources through its IPS risk model, the Coast Guard has worked with foreign governments to mutually recognize each other’s maritime security programs, which can more efficiently use IPS resources and reduce risks. For example, in September 2012, the Coast Guard signed a memorandum of understanding (MOU) with the European Union that establishes a process for mutually recognizing security inspections of each other’s ports. The European Union has developed regulations for the consistent implementation of the ISPS Code by its member states and established a process for verifying the effectiveness of its member states’ maritime security measures. This process includes European Union inspections of member states’ ports that result in reports that (1) identify any nonconformities with the regulations and (2) make recommendations to address any nonconformities. Under the MOU procedures, the Coast Guard recognizes a successful European Union inspection of its member states’ ports in the same manner as it would recognize a successful country visit by Coast Guard IPS inspectors. Coast Guard IPS officials stated that they have collaborated with their European counterparts to develop standard operating procedures for these port inspections and they were used in a recent joint inspection of a container facility in Felixstowe, the United Kingdom. According to DHS documents and Coast Guard IPS officials in Europe, by signing this MOU, the Coast Guard plans to reassign some IPS officials from Europe to Africa, where certain countries are having more difficulties in implementing effective antiterrorism measures in their ports. Coast Guard IPS officials reported, however, that a trade-off of signing the MOU is that its IPS officials will not have the same opportunities to have face-to-face interactions and share port security information and practices directly with their European Union counterparts as in the past. Despite this trade-off, the Coast Guard IPS officials stated that entering into such arrangements increases efficiencies and noted that they intend to negotiate additional MOUs with other foreign governments that have strong port inspection programs. CBP has worked with foreign partners to mutually recognize each other’s AEO programs to more efficiently use resources while continuing to reduce risks to the global supply chain. According to the World Customs Organization, as of June 2013, there were 25 AEO programs worldwide, other than C-TPAT, with which CBP could enter into an MRA. As part of the evaluation of a foreign partner’s capacity for entering into an MRA, CBP conducts joint validations with the other partner to ensure that a partner’s AEO program has security standards that are equivalent to those required by the C-TPAT program. 
CBP officials stated that CBP does not pursue mutual recognition with a Customs administration that does not have an equivalent AEO program in place because doing so could compromise the security of U.S.-bound container shipments. As of July 2013, CBP had signed MRAs with seven foreign Customs administrations—New Zealand in 2007, Canada and Jordan in 2008, Japan in 2009, the Republic of (South) Korea in 2010, and the European Union and the Taipei Economic and Cultural Representative Office (Taiwan) in 2012—and is in the process of negotiating MRAs with five other partners. CBP officials stated that they expect to complete MRA negotiations with one partner by the end of fiscal year 2013 and that they generally complete one or two MRAs each year. To help foreign countries establish AEO programs, CBP officials stated that the C-TPAT program provides training and technical assistance for foreign Customs agencies that request technical assistance. As of April 2013, CBP officials reported that C-TPAT has provided assistance to about 70 foreign countries and noted that this assistance improves global supply chain security. Further, CBP officials told us that the goal of this assistance is to establish AEO-MRAs with foreign Customs agencies as a means to increase efficiencies in supply chain security efforts. According to CBP officials, by relying on MRA partners to validate supply chain security procedures overseas, CBP is able to operate more efficiently by reducing the costs associated with conducting security validations. For example, in 2010, CBP completed a study on AEO validation visits conducted on its behalf in Japan and Canada by the respective host governments. On the basis of cost data from prior validation visits, CBP estimates the C-TPAT program saved over $290,000 and over 1,500 staff hours by accepting the 90 validations completed by the Japanese and Canadian governments during 2009 and 2010. Further, according to CBP officials, mutual recognition leads to a common understanding of global supply chain security standards, resulting in greater program efficiency and a streamlined validation process by reducing the number of redundant validations. As a result, mutual recognition enables CBP to focus its resources on higher-risk supply chains. CBP officials also stated that AEO program officials are in a better position to conduct validations of companies within their respective AEO programs because these officials are proficient in the local language and are more familiar with the companies’ supply chains. MRAs can increase efficiencies in the C-TPAT program, but CBP faces challenges in implementing MRAs. According to C-TPAT data, since 2009, CBP has accepted over 480 validations conducted by staff from foreign governments that have signed MRAs with the United States. Further, these data show that the number of validations conducted by MRA partners has increased significantly each year from 2009 (26) through 2012 (285), and CBP officials stated that they expect the number of validations to continue to increase because the European Union and Taiwan—two of the United States’ largest trading partners—are expected to conduct more validations in 2013. While MRAs have resulted in increased efficiencies, CBP and foreign government officials we met with identified challenges in implementing MRAs. 
For example, CBP and foreign government officials we met with stated that exchanging data across information technology systems can be difficult, and government officials from one foreign partner stated that differences in privacy laws between partners can create additional hurdles to information sharing. As a result, it may take time for the benefits to be evident to the AEO partners. Specifically, private sector trade officials in one country we visited reported that they had not yet realized the benefits of the MRA through reduced inspections of their shipments at U.S. ports. In addition, World Customs Organization officials we met with said that it may be difficult to document the benefits of MRAs through reduced inspections because U.S. agencies other than CBP also have their own inspection procedures for imported cargo that are not part of any MRA. For example, according to CBP, the Food and Drug Administration has its own inspection process. As a result, MRA participants’ shipments could still be slowed. According to CBP officials, CBP is working with other federal agencies to harmonize the inspection process at ports of entry and accelerate inspection decision making to address this issue. CBP has entered into AEO-MRAs with other partners, but does not have plans to negotiate Customs-to-Customs MRAs. Under a Customs-to-Customs MRA, joint activities, such as identifying cargo for examination, would not require the placement of CBP targeters in foreign ports under programs like CSI. CBP officials said they have no plans to negotiate Customs-to-Customs MRAs because such arrangements are much more difficult to achieve than AEO-MRAs, in part because of the difficulties in ensuring Customs practices are applied consistently. For example, CBP officials said that Customs-to-Customs MRAs would need to include a broader validation of foreign Customs administrations’ practices. World Customs Organization officials we met with concurred that achieving mutual recognition of Customs controls is difficult and noted that the focus of Customs administrations worldwide is on negotiating AEO-MRAs rather than Customs-to-Customs MRAs. CBP has also made efforts to improve the efficiency and effectiveness of its C-TPAT program—and thus the security of the global supply chain—by increasing the number and category of C-TPAT members. For example, CBP has increased C-TPAT membership by conducting outreach events to increase awareness of the C-TPAT program and incentives. From fiscal years 2008 through 2012, the number of C-TPAT members increased by about 17 percent—from 8,882 to 10,425. According to the 2013 DHS Annual Performance Report, as of fiscal year 2012, C-TPAT members account for more than 50 percent of all U.S. cargo imports (by value), which exceeds CBP’s performance target goal of 45 percent. Further, as part of C-TPAT’s membership expansion efforts, the program is considering adding two supply chain sectors—exporters and distribution centers. CBP officials reported that C-TPAT selected these sectors because they can have a direct impact in securing the global supply chain. Moreover, according to the 2012 C-TPAT Strategy Action Plan, increased membership in the C-TPAT program could allow U.S. ports of entry to operate more efficiently because CBP officials at these ports would be able to focus CBP’s targeting and inspection resources on a smaller percentage of high-risk shipments.
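For reference, the membership growth rate above and the per-validation savings implied by the MRA study discussed earlier follow from simple arithmetic. The per-validation averages below are derived for illustration; they are not figures reported by CBP.

# Worked arithmetic for figures cited above. The inputs are the report's
# numbers; the per-validation averages are derived here for illustration.

members_2008, members_2012 = 8_882, 10_425
growth = (members_2012 - members_2008) / members_2008
print(f"C-TPAT membership growth, FY2008-FY2012: {growth:.1%}")  # 17.4%

# MRA validation savings CBP reported for Japan and Canada, 2009-2010.
savings_dollars, savings_hours, validations = 290_000, 1_500, 90
print(f"roughly ${savings_dollars / validations:,.0f} and about "
      f"{savings_hours / validations:.0f} staff hours saved per validation")
# roughly $3,222 and about 17 staff hours saved per validation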
Although expansion of C-TPAT membership should increase program efficiencies systemwide, CBP faces challenges in increasing C-TPAT effectiveness because of staffing constraints. In particular, while the C-TPAT program has continued to expand in size and scope in recent years, staffing within the program has decreased. Specifically, according to CBP officials, as of July 2013, the C-TPAT program had 155 staff, down from a peak of 196 staff in January 2011. CBP plans to take several steps to address this staffing challenge. For example, CBP officials reported that as of July 2013, C-TPAT is working with CBP’s Office of Human Resources to hire 11 additional Supply Chain Security Specialists. Furthermore, according to fiscal year 2014 CBP budget documentation, CBP plans to extend the C-TPAT revalidation cycle to once every 4 years, as mandated by the SAFE Port Act, rather than accelerating the revalidation schedule to once every 3 years as CBP had previously done. Moreover, C-TPAT officials reported that CBP anticipates a reduction in foreign validation visits by its specialists through the implementation of MRAs. An additional challenge to C-TPAT program effectiveness is that C-TPAT partners’ compliance rates with program security requirements decreased from almost 100 percent in fiscal year 2008 to about 95 percent in fiscal year 2012. According to CBP documentation, the overall compliance rate decreased after CBP strengthened C-TPAT security criteria and increased program oversight. CBP reported that C-TPAT is working with C-TPAT partners to explain the enhanced security criteria to ensure they understand the validation requirements. CBP officials said that they expect this will lead to improvements in C-TPAT partners’ compliance with the security requirements. As a result of reduced program budgets in recent years, CBP has implemented CSI changes to take advantage of improvements in technology and more efficiently use its CSI targeters, but efficiencies are limited by host country political and legal factors. Specifically, CSI program expenditures declined by more than $50 million from fiscal years 2008 through 2012, and this cut led to changes in how CBP has staffed its CSI ports. As shown in figure 5, CBP employs a variety of approaches in targeting and examining U.S.-bound containerized cargo imported from CSI countries. These targeting approaches are explained below.

National Targeting Center-Cargo (NTC-C) support. In April 2005, we recommended that CBP revise the CSI targeting approach to consider what functions need to be performed at CSI ports and what functions can be performed in the United States. CBP agreed with this recommendation and, in January 2009, began transferring some CSI staff from overseas ports to perform targeting remotely from the NTC-C. According to CBP officials, NTC-C staff are less costly than overseas staff. Under this revised targeting approach, NTC-C targeters review U.S.-bound shipments from foreign ports in 6 CSI countries. For those shipments that NTC-C targeters determine to be high risk or suspect, NTC-C targeters request that host government Customs officials complete examinations and electronically provide the results to NTC-C staff. Further, according to CSI officials, NTC-C targets all shipments ATS categorizes as lower risk in an additional 6 CSI countries so that CSI targeters in those 6 countries can concentrate their reviews on the higher-risk shipments.
According to CBP officials, implementation of this targeting approach allows CBP to staff high-volume ports with fewer CSI targeters. Our analysis of CSI staffing data shows that staffing of CBP targeters who support CSI at the NTC-C increased by 56 percent from fiscal years 2009 through 2012—from 27 to 42. Changes in CBP’s staffing of in-country targeters are discussed below.

Regional hub model. In 2011 and 2012, CBP implemented a regional hub model whereby CSI targeters are stationed at one port but target for multiple ports within the same country to reduce staff and thereby increase efficiencies. Under this targeting approach, host government Customs officials at remote ports complete the container examinations and electronically provide the results to CSI targeters at the regional hub. According to CBP and host government officials, implementation of the regional hub is possible because of improvements in technology that allow for better and more timely transmission of image scans. Of the 13 countries with multiple CSI ports, 3 employ the regional hub model—England, France, and Italy. CBP officials reported that since implementing the regional hub model, CBP has reduced the number of CSI targeters in these 3 countries by 45 percent—from 20 in October 2011 to 11 as of April 2013. According to both CBP targeters stationed in England and their British counterparts, implementation of the regional hub model has not affected the quality or number of scans of U.S.-bound container shipments. Although implementation of the regional hub model increases efficiencies, CBP officials stated that they do not have plans to implement the regional hub model in other countries in the near future for host country political and legal reasons. For example, CBP officials told us that CBP considered implementing the regional hub model in one country; however, the host government preferred to maintain the face-to-face interaction between the CSI targeters and their host government counterparts at each CSI port as a means to improve information exchanges and increase collaboration. Further, according to CBP and government officials in one country, a national law precludes the transmission of electronic scanned images other than to host government Customs officials. As a result, CSI targeters must be present at each CSI port in order to view the scanned container images.

In-country CSI targeters. Where possible, CBP has shifted from the initial CSI targeting approach that was heavily dependent on the placement of targeters at foreign ports to an approach that takes advantage of improvements in technologies for transmitting image scans, as addressed earlier. Specifically, from fiscal years 2009 through 2012, CBP reduced the number of CSI targeters stationed at foreign ports by 50 percent—from 153 to 77. However, as noted above, CBP increased the number of CSI targeters stationed at the NTC-C during the same time period. CBP maintains in-country targeters in 20 of the 34 CSI countries. A key benefit of maintaining CSI targeters at these ports is the relationship built with host government counterparts. CSI targeters in all 6 foreign countries we visited and host government officials in 5 of the 6 countries told us that the personal relationships and trust established between CSI targeters and host country government officials by having the CSI targeters in country are fundamental to the success of the CSI program.
In particular, the CSI targeters and host government officials in these 5 countries agree that the physical presence of CSI staff increases information sharing and improves collaboration. Further, host country Customs officials in 3 of the 6 countries we visited stated that the presence of CSI targeters contributed to the development or enhancement of their countries’ cargo targeting programs. According to our review of CBP performance data, changes in staffing levels in recent years have not negatively affected the effectiveness of the CSI program. In particular, CBP tracks two performance measures—(1) the percentage of U.S.-bound cargo container shipments that are reviewed by CSI targeters and (2) the percentage of U.S.-requested cargo examinations that are completed by host countries. According to CBP data from fiscal years 2009 through 2012, CSI targeters met their target goal of reviewing 100 percent of the U.S.-bound cargo shipments. Moreover, the percentage of U.S.-requested examinations of U.S.-bound cargo shipments completed by host countries increased from 93 percent in fiscal year 2009 to 98 percent in fiscal year 2012, although CBP did not meet the target goal of 100 percent. CBP reported that CSI relies on the voluntary cooperation of host nation Customs officials and that CBP works with the host ports to resolve examination issues as they arise in an effort to increase the percentage of U.S.-bound shipments that are examined. CBP has made efforts to expand the scope of CSI targeting beyond WMD, where possible, in an effort to increase the effectiveness of the CSI program. While the priority focus of CSI is to prevent WMD and other terrorist contraband from entering the United States through cargo containers, the April 2011 CSI Statement of Policy and Intent prioritized expanding the scope of CSI beyond WMD, among other things. In particular, according to the CSI Strategy Action Plan, as well as CSI program officials with whom we met, CBP is negotiating with government officials in foreign countries where CBP has CSI targeters to expand the focus of CSI’s targeting efforts beyond WMD to include other contraband, such as illicit drugs, illegal weapons, and counterfeit goods (intellectual property right violations). The CBP officials we met with noted, however, that expanding the scope of CSI targeting efforts beyond WMD is ultimately at the discretion of the host governments with whom CBP has negotiated guidelines for CSI program operations. While two of the six CSI countries that we visited allow CSI staff to target U.S.-bound cargo container shipments for contraband other than WMD, the remaining four countries generally limit targeting and examinations to cargo containers suspected of containing WMD. Government officials from one of these four countries stated it is CBP’s responsibility to scan containers for other suspected contraband, such as illicit drugs, once the containers arrive in the United States. Customs officials from another one of these four countries stated they do not have the resources to devote to scanning U.S.-bound containers that may be at risk for containing contraband other than WMD. According to CBP officials, though, expanding the scope of targeting at foreign ports by its CSI targeters has not resulted in additional costs to CBP in terms of numbers of targeters or funding. Reducing risks to the global maritime supply chain is critical because foreign ports and the cargo carried by vessels from these ports are vital to the U.S. economy. 
DHS has made progress in reducing some maritime supply chain risks through its various maritime container security programs. The Coast Guard has developed a port security risk model that it annually updates and uses to assess port facility security, inform operational decisions, and direct resources. In contrast, since 2005, CBP has not assessed the risks of foreign ports that ship cargo to the United States to determine whether its existing CSI locations remain valid. Although there have been no known incidents of cargo containers being used to transport WMD, the maritime supply chain remains vulnerable to attacks. We recognize that it may not be possible to include all of the higher-risk ports in CSI because CSI requires the cooperation of sovereign foreign governments. However, DHS and GAO risk management practices state that agencies should periodically evaluate the effectiveness of their programs and that mechanisms should be in place for altering a program based on current risk data. Periodically assessing the risk level of cargo shipped from foreign ports and using the results of these risk assessments to inform any future expansion of CSI to additional locations as well as determining whether changes need to be made to existing CSI ports would help ensure that CBP is allocating its resources to provide the greatest possible coverage of high-risk cargo to best mitigate the risk of importing WMD or other terrorist contraband into the United States through the maritime supply chain. To better ensure the effectiveness of the CSI program, we recommend that the Secretary of Homeland Security direct the Commissioner of U.S. Customs and Border Protection to periodically assess the supply chain security risks from all foreign ports that ship cargo to the United States and use the results of these risk assessments to (1) inform any future expansion of CSI to additional locations and (2) determine whether changes need to be made to existing CSI ports and make adjustments as appropriate and feasible. In August 2013, we requested comments on a draft of this report from the Departments of Homeland Security and State. Both departments provided technical comments, which we have incorporated into the report, as appropriate. In addition to its technical comments, DHS provided an official letter for inclusion in the report, which can be seen in appendix II. In its letter, DHS stated it concurred with the recommendation and plans to develop a process for conducting periodic assessments of the supply chain security risks from all ports that ship cargo to the United States and use information from the assessments to determine if future expansion or adjustments to CSI locations are appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretaries of State and Homeland Security, appropriate congressional committees, and other interested parties. This report will also be available at no charge on GAO’s website at http://www.gao.gov. If you or your staff have any questions, please contact me at (202) 512-9610 or caldwells@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Staff acknowledgments are provided in appendix III.
This appendix provides information on the foreign ports that either participate directly in the Container Security Initiative (CSI) program or that U.S. Customs and Border Protection (CBP) otherwise coordinates with to review and secure U.S.-bound cargo container shipments. As of July 2013, CBP was coordinating targeting of U.S.-bound cargo container shipments with 61 foreign ports. Table 2 lists these ports according to the date the ports began conducting operations with CBP and also provides information on, among other things, the volume of U.S.-bound shipments passing through the seaport in fiscal year 2012 and the targeting approach employed. In addition to the contact named above, Christopher Conrad (Assistant Director), Josh Diosomito, and Paul Hobart made key contributions to this report. Also contributing to this report were Charles Bausell, Frances Cook, Stanley Kostyla, and Lara Miklozek. Combating Nuclear Smuggling: Megaports Initiative Faces Funding and Sustainability Challenges. GAO-13-37. Washington, D.C.: October 31, 2012. Supply Chain Security: CBP Needs to Conduct Regular Assessments of Its Cargo Targeting System. GAO-13-9. Washington, D.C.: October 25, 2012. Maritime Security: Progress and Challenges 10 Years after the Maritime Transportation Security Act. GAO-12-1009T. Washington, D.C.: September 11, 2012. Supply Chain Security: Container Security Programs Have Matured, but Uncertainty Persists over the Future of 100 Percent Scanning. GAO-12-422T. Washington, D.C.: February 7, 2012. Homeland Security: DHS Could Strengthen Acquisitions and Development of New Technologies. GAO-11-829T. Washington, D.C.: July 15, 2011. Maritime Security: Responses to Questions for the Record. GAO-11-140R. Washington, D.C.: October 22, 2010. Supply Chain Security: DHS Should Test and Evaluate Container Security Technologies Consistent with All Identified Operational Scenarios to Ensure the Technologies Will Function as Intended. GAO-10-887. Washington, D.C.: September 29, 2010. Supply Chain Security: CBP Has Made Progress in Assisting the Trade Industry in Implementing the New Importer Security Filing Requirements, but Some Challenges Remain. GAO-10-841. Washington, D.C.: September 10, 2010. Supply Chain Security: Feasibility and Cost-Benefit Analysis Would Assist DHS and Congress in Assessing and Implementing the Requirement to Scan 100 Percent of U.S.-Bound Containers. GAO-10-12. Washington, D.C.: October 30, 2009. Supply Chain Security: CBP Works with International Entities to Promote Global Customs Security Standards and Initiatives, but Challenges Remain. GAO-08-538. Washington, D.C.: August 15, 2008. Supply Chain Security: U.S. Customs and Border Protection Has Enhanced Its Partnership with Import Trade Sectors, but Challenges Remain in Verifying Security Practices. GAO-08-240. Washington, D.C.: April 25, 2008. Supply Chain Security: Examinations of High-Risk Cargo at Foreign Seaports Have Increased, but Improved Data Collection and Performance Measures Are Needed. GAO-08-187. Washington, D.C.: January 25, 2008. Cargo Container Inspections: Preliminary Observations on the Status of Efforts to Improve the Automated Targeting System. GAO-06-591T. Washington, D.C.: March 30, 2006. Container Security: A Flexible Staffing Model and Minimum Equipment Requirements Would Improve Overseas Targeting and Inspection Efforts. GAO-05-557. Washington, D.C.: April 26, 2005. Cargo Security: Partnership Program Grants Importers Reduced Scrutiny with Limited Assurance of Improved Security. GAO-05-404. 
Washington, D.C.: March 11, 2005. Container Security: Expansion of Key Customs Programs Will Require Greater Attention to Critical Success Factors. GAO-03-770. Washington, D.C.: July 25, 2003.
Foreign ports and the cargo carried by vessels from these ports are critical to the U.S. economy, but can be exploited by terrorists. Within DHS, CBP and the Coast Guard are responsible for maritime security. Through CSI, CBP identifies and examines U.S.-bound cargo that may conceal weapons of mass destruction (WMD), and through C-TPAT, CBP partners with international trade community members to secure the flow of U.S.-bound goods. Under the IPS program, Coast Guard officials visit foreign ports to assess compliance with security standards. GAO was asked to review DHS's maritime security programs. This report addresses (1) the extent to which DHS has assessed the foreign ports that pose the greatest risk to the global supply chain and focused its maritime container security programs to address those risks, and (2) actions DHS has taken to help ensure the efficiency and effectiveness of its maritime security programs. GAO analyzed DHS risk models and maritime security program strategies, met with program officials, and visited six foreign countries selected on the basis of participation in CSI, varied cargo shipment risk levels, and other factors. Department of Homeland Security (DHS) components have developed models to assess the risks of foreign ports and cargo, but not all components have applied risk management principles to assess whether maritime security programs cover the riskiest ports. The U.S. Coast Guard uses its risk model to inform operational decisions for its International Port Security (IPS) program and annually updates its assessment. In contrast, U.S. Customs and Border Protection (CBP) has not regularly assessed ports for risks to cargo under its Container Security Initiative (CSI) program. CBP's selection of the initial 23 CSI ports was primarily based on the volume of U.S.-bound containers, but beginning in 2003, CBP considered more threat information when it expanded the number of CSI ports. CBP has not assessed the risk posed by foreign ports that ship cargo to the United States for its CSI program since 2005. In 2009, CBP developed a model that ranked 356 potential expansion ports for a related program on the basis of risk, but it was never implemented because of budget cuts. By applying CBP's risk model to fiscal year 2012 cargo shipment data, GAO found that CSI did not have a presence at about half of the ports CBP considered higher risk, and about one-fifth of the existing CSI ports were at lower-risk locations. Since the CSI program depends on cooperation from sovereign host countries, there are challenges to implementing CSI in new foreign locations, and CBP's negotiations with other countries have not always succeeded. For example, CBP officials said it is difficult to close CSI ports and open new ports because removing CSI from a country might negatively affect U.S. relations with the host government. However, periodically assessing the risk level of cargo shipped from foreign ports and using the results to inform any future expansion of CSI to additional locations, as well as determine whether changes need to be made to existing CSI ports, would help ensure that CBP is allocating its resources to provide the greatest possible coverage of high-risk cargo to best mitigate the risk of importing WMD or other terrorist contraband into the United States through the maritime supply chain. DHS has taken steps to improve the efficiency and effectiveness of its maritime security programs, but faces host country political and legal constraints.
The Coast Guard has implemented a risk-informed model that prioritizes the countries to visit and assist. Also, the Coast Guard and CBP have made arrangements with foreign government entities to mutually recognize inspections of each other's ports and maritime supply chains through the IPS and Customs-Trade Partnership Against Terrorism (C-TPAT) programs. CBP has also utilized technological improvements to target some U.S.-bound cargo shipments remotely from the United States to reduce CSI staff in foreign countries. However, CBP faces political and legal constraints in host countries. For example, according to CBP and government officials in one country, a national law precludes the transmission of electronic scanned images other than to host government Customs officials. As a result, CSI officials must be present at each CSI port in that country to view the scanned images. Further, in some ports, CBP has made efforts to expand the scope of its CSI targeting to include contraband other than WMD, but that is subject to approval by the host governments. GAO recommends that CBP periodically assess the supply chain security risks from foreign ports that ship cargo to the United States and use the results to inform any future expansion of CSI and determine whether changes need to be made to existing CSI ports. DHS concurred with GAO's recommendation.
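The coverage comparison described in the summary above, in which GAO applied CBP's risk model to fiscal year 2012 cargo data and compared the resulting ranking with CSI locations, can be illustrated with a minimal sketch. The port names, weights, and scores below are hypothetical assumptions for illustration only; they are not CBP's actual model or data.

from dataclasses import dataclass

@dataclass
class Port:
    name: str
    csi_presence: bool
    threat: float   # hypothetical threat index, 0 to 1
    volume: float   # normalized U.S.-bound container volume, 0 to 1

def risk_score(port: Port) -> float:
    # Composite score; the 60/40 weighting is an illustrative assumption.
    return 0.6 * port.threat + 0.4 * port.volume

ports = [
    Port("Port A", csi_presence=True,  threat=0.9, volume=0.8),
    Port("Port B", csi_presence=False, threat=0.8, volume=0.9),
    Port("Port C", csi_presence=True,  threat=0.2, volume=0.3),
    Port("Port D", csi_presence=False, threat=0.7, volume=0.6),
]

# Rank ports from highest to lowest risk and treat the top half as "high risk."
ranked = sorted(ports, key=risk_score, reverse=True)
cutoff = len(ranked) // 2
high_risk, lower_risk = ranked[:cutoff], ranked[cutoff:]

covered = sum(1 for p in high_risk if p.csi_presence)
print(f"CSI presence at {covered} of {len(high_risk)} high-risk ports")
print("CSI ports at lower-risk locations:",
      [p.name for p in lower_risk if p.csi_presence])

On these hypothetical inputs, the sketch reports a gap of the same general shape GAO found: a high-risk port without CSI coverage and a CSI port in the lower-risk half of the ranking.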
The primary federal laws that govern how EPA regulates pesticides in the United States are FIFRA and the Federal Food, Drug, and Cosmetic Act (FFDCA). Under FIFRA implementing regulations, EPA is to review applications for pesticide products and register those that it determines will meet the FIFRA statutory standards for registration. If the use of a pesticide would result in a residue of the substance in or on food or animal feed, EPA may not register the pesticide under FIFRA unless it can determine that the residue is "safe" as defined by FFDCA. Under FFDCA, safe means that EPA has determined, among other things, that there is a reasonable certainty that no harm will result from aggregate exposure to the pesticide residue, including all anticipated dietary exposures and all other nonoccupational exposures for which there is reliable information. EPA may establish a tolerance level—the maximum permissible pesticide residue in or on food or animal feed that is sold—that meets the FFDCA safety standard, or it may grant an exemption from the requirement of a tolerance. OPP, the EPA office primarily responsible for regulating the use of pesticides, has regulatory staff in three divisions (Registration; Biopesticides and Pollution Prevention; and Antimicrobials) that are responsible for registering pesticides. The registration process formally begins when a registrant submits an application to OPP for a particular pesticide. This application is to include data to support the registration of the pesticide. In reviewing the application, OPP is to examine, among other things, the pesticide's ingredients; the site or crop on which it is to be used; the amount, frequency, and timing of its use; and storage and disposal practices. OPP is also to review toxicity tests and studies showing how the pesticide affects human health and the environment. According to OPP officials, the length of time OPP takes from the initial review of an application to the final decision on whether to register a pesticide depends on many factors, including whether the pesticide being reviewed is similar to any pesticide EPA has previously reviewed, and can range from 3 to 24 months. After OPP completes its review and approves the submitted package, EPA may register the pesticide without imposing requirements for additional data (unconditional registration) under FIFRA section 3(c)(5) if EPA determines, among other things, that use of the pesticide in accordance with label directions will not have unreasonable adverse effects on the environment. Alternatively, FIFRA section 3(c)(7) allows EPA to grant a conditional registration for pesticides in the following circumstances: Identical/substantially similar pesticides (FIFRA Section 3(c)(7)(A)). EPA may conditionally approve an application for registration or an amended registration for a pesticide product if the agency determines that (1) the pesticide and proposed use are identical or substantially similar to any currently registered pesticide and its uses, or differ only in ways that will not significantly increase the risk of unreasonable adverse effects on the environment, and (2) approving the registration or amendment in the manner proposed would not significantly increase the risk of any unreasonable adverse effect on the environment. Each applicant for registration under section 3(c)(7)(A) must submit or cite the same data that would be required for the unconditional registration of a similar product. New uses (FIFRA Section 3(c)(7)(B)).
A current pesticide registration may be amended to allow additional uses, even if the data concerning the pesticide may be insufficient to support unconditional registration, if EPA determines that (1) the applicant has submitted satisfactory data pertaining to the proposed additional use and (2) amending the registration would not significantly increase the risk of unreasonable adverse effects on the environment. Each registrant must submit or cite the same data that would be required for the unconditional registration of a similar product. New active ingredients (FIFRA Section 3(c)(7)(C)). A pesticide containing a new active ingredient not found in any currently registered pesticide can be conditionally registered for a period reasonably sufficient for the generation and submission of required data if EPA determines that (1) insufficient time has elapsed since the imposition of the data requirement for those data to be developed, on the condition that, when the agency receives such data, they do not meet or exceed risk criteria stated in regulations issued under FIFRA, and subject to any other conditions issued by the agency; (2) the use of the pesticide during the period of the conditional registration will not cause unreasonable adverse effects on the environment; and (3) the use of the pesticide is in the public interest. After a pesticide product is conditionally registered under FIFRA section 3(c)(7), the registrant receives a notice indicating the terms of the conditional registration, including a list of any additional data that will need to be submitted and deadlines for submitting these data. Figure 1 summarizes the pesticide registration and tolerance setting process. According to EPA officials, once a pesticide is conditionally registered, EPA typically grants a period of time, generally 1 to 4 years, for the registrant to provide the required data. The registrant can ask EPA to waive the requirement for additional information or, according to EPA officials, extend the time frame. If the registrant does not submit the data specified within the required time frame, EPA can cancel the pesticide registration. Before a pesticide can be sold or distributed in the United States, it must be registered under FIFRA, and at any time, EPA may initiate a suspension or cancellation proceeding for a pesticide registration if safety concerns develop. For example, EPA began proceedings to cancel some uses of Carbofuran—an insecticide and nematicide that was registered to control pests in soils and on leaves in a variety of field, fruit, and vegetable crops—after the agency determined that the dietary, worker, and ecological risks of this pesticide were unacceptable. Another check on the safety of registered pesticide products is the requirement in FIFRA section 6(a)(2) and FIFRA implementing regulations that registrants report adverse effects-related information to EPA. For example, registrants are required to submit certain toxicity information concerning the product both before and after registration, such as information on the product's toxicity to nontarget plant species. In addition, as required by FIFRA, EPA is to review the safety of each registered pesticide every 15 years to help ensure that each pesticide registration continues to satisfy the regulatory standard.
In 2007, EPA began conducting these reviews under its registration review program. As a part of this program, if EPA determines that additional data are needed to support the continued registration of a pesticide, the agency may issue a Data Call-In (DCI) notice, as authorized by FIFRA section 3(c)(2)(B), requiring the registrant to provide the data by a specific date. Also, at any time after a pesticide is registered, a registrant may apply to amend the registration and, according to OPP officials, such requests are reviewed as though the registrant is seeking approval for a new pesticide. While reviewing such requests, the agency may also issue a DCI notice requiring the registrant to provide additional data by a specific date. If a registrant fails to provide the data requested through a DCI, EPA may suspend the pesticide's registration under authority of FIFRA section 3(c)(2)(B). The Pesticide Registration Improvement Act of 2003 (PRIA), Pub. L. No. 108-199, Div. G, Tit. V, § 501, 118 Stat. 419 (2004), amended FIFRA by, among other things, establishing pesticide registration fees for some registration actions. PRIA was reauthorized in 2007 (Pesticide Registration Improvement Renewal Act of 2007, Pub. L. No. 110-94, 121 Stat. 1000 (2007), or PRIA 2) and again in 2012 (Pesticide Registration Improvement Extension Act of 2012, Pub. L. No. 112-177, 126 Stat. 1327, or PRIA 3). An environmental group asked OPP about the number of conditional registrations the agency had issued, whether registrants had submitted the additional data required by the registrations, and whether EPA had reviewed the data submitted. OPP provided the group with information from the OPPIN data system on the number of pesticide registrations that had been categorized as conditional. Subsequently, in September 2010, the environmental group raised concerns about EPA's use of its conditional registration authority, and several other environmental groups and other interested parties supported this position. Among other things, the environmental group asserted that EPA had overused conditional registrations and did not appear to have a reliable tracking system to identify the status of conditionally registered pesticides to ensure that registrants submitted, and EPA reviewed, additional data in a timely manner. In addition, the group noted that the information EPA provided indicated that many pesticides have remained in conditional status for many years. For example, the information showed that over 3,200 pesticides had been in conditional status since 1995 (15 years) and that 2,100 pesticides had been in conditional status since 1990 (20 years). The number of active conditional registrations EPA has granted is unclear because, according to OPP officials, a 2011 EPA review found the agency's registration data to be inaccurate and the basis for granting some of these registrations to be misclassified. Specifically, an internal review of OPP's conditional registration program found that OPPIN does not allow officials to change a pesticide's registration status from conditional to unconditional once the registrant has satisfied all data requirements, and that the basis for many registration decisions was mischaracterized as conditional. In addition, based on the internal review, OPP officials concluded that several weaknesses contributed to this misclassification problem, including insufficient guidance and training, management oversight, and data management.
As of July 2013, OPP officials told us that the office has taken or is planning to take several actions to more accurately account for conditional registrations. According to OPP officials, following an internal review of its conditional registration program that it completed in March 2011, OPP concluded that the OPPIN data on the number of conditional registrations were inaccurate. The internal review was conducted, in part, to determine the number of conditional registrations granted by EPA. According to information on OPP's website, during the internal review, OPP determined that OPPIN contained 16,156 active pesticide registrations and that 11,205 (69 percent) of these pesticides were conditionally registered. However, OPP officials concluded, based on the internal review, that the data were inaccurate, and that the number of conditional registrations was overstated, for two reasons. First, once a registration is classified as conditional in OPPIN, its status in this data system cannot be changed from conditional to unconditional when, for example, the registrant has satisfied all of the data requirements imposed. According to the internal review, OPPIN is an older system that was not designed specifically to track conditional registrations and thus is ill-suited for that purpose. To determine the current number of conditionally registered pesticides, OPP officials said detailed paper files that support each pesticide registration would need to be reviewed, which would be a very time-consuming process. OPP officials indicated that they plan to develop a new automated system for tracking conditional registrations and, in fiscal year 2013, they began using a portion of the registration maintenance fees collected annually to begin exploring the feasibility of implementing such a system. However, these officials were uncertain about the ultimate cost of this system, what sources they would use for any additional funding, and when the system would be operational. Second, as a result of the internal review, OPP found that its regulatory staff had incorrectly categorized the basis for many program actions in OPPIN as "conditional registrations" and that these incorrect categorizations resulted in an overcounting of conditional registrations. According to the internal review, OPP staff had used conditional registrations to describe a variety of actions that fall outside of the circumstances authorized by FIFRA Section 3(c)(7). For example, according to OPP officials and the internal review, OPP staff had assigned the category "conditional registration" to situations where approval of a registration is contingent upon the "condition" that the registrant make a change that does not involve generating additional data. These situations included certain changes to pesticide product labels—such as strengthening precautionary statements—that are not specified by FIFRA section 3(c)(7), according to the results of the internal review. Similarly, OPP staff categorized as "conditional registrations" situations where the agency requested registrants to provide certain pesticide product-specific information—such as product chemistry studies related to storage stability that are used to determine label requirements—that does not fall under FIFRA Section 3(c)(7).
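Both weaknesses OPP identified, a status field frozen at data entry and categories that invite mislabeling, are essentially data-model problems. The following minimal sketch shows one way a replacement system could derive a registration's conditional status instead of storing it; the field names and logic are our illustrative assumptions, not OPP's design for its planned system.

from dataclasses import dataclass, field
from enum import Enum

class Basis(Enum):
    UNCONDITIONAL_3C5 = "FIFRA 3(c)(5)"
    SIMILAR_PRODUCT_3C7A = "FIFRA 3(c)(7)(A)"
    NEW_USE_3C7B = "FIFRA 3(c)(7)(B)"
    NEW_ACTIVE_INGREDIENT_3C7C = "FIFRA 3(c)(7)(C)"
    OTHER_ACTION = "non-3(c)(7) action, e.g., a label change"

CONDITIONAL_BASES = {Basis.SIMILAR_PRODUCT_3C7A, Basis.NEW_USE_3C7B,
                     Basis.NEW_ACTIVE_INGREDIENT_3C7C}

@dataclass
class Registration:
    product: str
    basis: Basis
    open_data_requirements: set = field(default_factory=set)

    @property
    def conditional(self) -> bool:
        # Status is derived, not stored, so it cannot go stale: once all
        # required data are satisfied, the record stops reading as conditional.
        return self.basis in CONDITIONAL_BASES and bool(self.open_data_requirements)

reg = Registration("Example product", Basis.NEW_ACTIVE_INGREDIENT_3C7C,
                   {"honeybee field study"})
print(reg.conditional)                          # True
reg.open_data_requirements.discard("honeybee field study")
print(reg.conditional)                          # False

Requiring every action to carry an explicit statutory basis, including a non-3(c)(7) catch-all category, would also make the kind of mislabeling described above visible in summary counts.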
The incorrect classification of actions as conditional registrations, according to the internal review, may leave the agency vulnerable to allegations by environmental, industry, and other stakeholders who assert that EPA inappropriately grants conditional registrations. However, according to OPP officials, all of the actions that were mistakenly categorized as conditional registrations were legitimate program actions that were lawful under other sections of FIFRA. We were unable to verify this assertion. Further, EPA still needs to take steps to correct these misclassifications in order to ensure the accuracy and integrity of its data and make clear the statutory basis for these program actions. Several weaknesses contributed to incorrect data entries into OPPIN. First, according to OPP officials, OPP regulatory staff did not have sufficient guidance or training to help them determine when a program action met the criteria for conditional registration. Second, according to OPP's internal review, there was little organized management oversight to ensure that regulatory actions not subject to the narrow scope of section 3(c)(7) were not mischaracterized by OPP staff as conditional registrations. As a result, as the internal review stated, the actions that were classified as conditional registrations have varied across OPP's three divisions and by the individuals entering data into OPPIN within each division. In addition, data management weaknesses contributed to the misclassification of registrations or other actions as conditional. For example, OPP officials said that OPPIN does not generate management reports of summary data that could have alerted managers to the excessive use of the "conditional registration" category resulting from OPP staff's inaccurate classification of actions. Under the federal standards for internal control, federal agencies are to employ internal control activities, such as management reviews at the functional or activity level, to help ensure that management's directives are carried out and to determine if agencies are effectively and efficiently using resources. Without the ability to generate summary data on conditional registrations from OPPIN, OPP managers cannot easily conduct such reviews. However, they are still responsible for monitoring and ensuring the accuracy of conditional registration data. In light of the apparent widespread misclassification of regulatory actions as conditional registrations, OPP, as part of its internal review of conditional registrations, analyzed data in OPPIN to, among other things, determine EPA's historical use of conditional registrations, including how many of each of the three types of conditional registrations authorized by FIFRA section 3(c)(7) had been granted. The OPP official primarily responsible for conducting this analysis said the intent of the analysis was to show that (1) as noted on the OPP website, only a small portion of the conditional registrations granted by EPA were for new uses under section 3(c)(7)(B) or new active ingredients under section 3(c)(7)(C), as intended, and (2) most of the conditional registrations granted were for identical or substantially similar products under section 3(c)(7)(A). However, in reviewing this information and related documentation, we found that the information on the website was unclear, contained discrepancies, and used technical terms without defining them, which could lead to misinterpretation of the information.
For example, the calculations presented on the website to support the conclusion that the overwhelming majority of actions identified in OPPIN as conditional registrations fall outside the circumstances authorized by FIFRA section 3(c)(7) incorrectly grouped conditional registrations for identical or substantially similar products authorized by section 3(c)(7)(A) with label amendments and other actions that fall outside the narrow scope of section 3(c)(7). When we met with OPP officials in November 2012 to discuss the analysis, they acknowledged the website could be clearer and said that the website would be revised to clarify any confusing language and correct any inaccurate statements. However, OPP had no specific plan or time frame for doing so. As of July 2013, these clarifications and corrections had not been made. Accurate and reliable data are essential to an efficient and effective operating environment in the federal government. To more accurately report on the number of pesticide products that are conditionally registered, OPP officials told us that the office has taken or planned to take the following actions: Beginning in the fall of 2010, representatives of the OPP divisions that deal with pesticide registrations began meeting with OPP management at least quarterly to, among other things, review proposed conditional registrations for pesticide products with new active ingredients to ensure that (1) any new conditional registrations granted for these products meet the circumstances outlined in FIFRA Section 3(c)(7)(C); (2) the additional data that would be requested as a part of the conditional registration are in fact needed; and (3) if the data are needed, EPA is still able to make the determination that the information available concerning the pesticide demonstrates that the FIFRA safety standard will be met, which requires that the use of the product during the time needed to generate the necessary data will not cause unreasonable adverse effects on the environment. According to OPP officials, since they started these quarterly reviews, the number of conditional registrations granted for new active ingredients generally has dropped. For example, since starting these reviews, they have been able to preclude cases of misclassification of new active ingredient registrations as "conditional" that had been occurring in the past. Specifically, they noted that, prior to 2010, in some cases, OPP staff had classified some of these registrations as conditional when the additional data being requested of the registrant could only be generated after the date of registration, such as data measuring the storage stability of a commercially manufactured version of the newly registered pesticide product. According to OPP officials, they do not regard such data requirements as being within the scope of FIFRA section 3(c)(7). In 2012, OPP began revising the registration categories in OPPIN to, among other things, more accurately reflect those circumstances under which conditional registrations may be granted under FIFRA. As of May 2013, OPP officials said that they had completed development of the categories and provided training to their regulatory staff on how to correctly assign the new categories to each type of registration. In July 2013, OPP officials said they had completed implementation of the new codes in OPPIN.
In addition to the training, OPP officials noted that the training materials will be available online for regulatory staff to consult for guidance on an ongoing basis. In fiscal year 2013, OPP began using a portion of the maintenance fees it collects to begin development of an electronic tracking system for conditional registrations. As discussed, OPP officials are not certain what the total cost of the system will be or when the system will be ready for implementation. Table 1 summarizes the status, according to OPP officials, of key actions taken or planned by OPP to improve the reliability of conditional registration data. While OPP officials acknowledged the need to ensure that registrations are accurately classified to reflect their statutory basis and to develop an electronic data system for tracking the status of conditional registrations, they stated that EPA's past practices for managing conditional registrations have not created additional risks to the environment and have been in compliance with applicable laws. Specifically, an EPA attorney stated that the agency views products conditionally registered as identical or substantially similar to currently registered pesticides under section 3(c)(7)(A) and as new uses of currently registered pesticides under section 3(c)(7)(B) as meeting the same safety standards as products registered "unconditionally" under section 3(c)(5). Therefore, according to this official, these products do not pose unreasonable adverse effects on human health or the environment. Further, a Deputy Director of OPP said section 3(c)(7)(A) and section 3(c)(7)(B) registrations make up the bulk of conditionally registered pesticides. In addition, OPP officials stressed that the program actions that were mischaracterized as conditional registrations were nevertheless legitimate program actions. Specifically, these officials and OPP's website note that most of these actions were taken pursuant to the authority of FIFRA implementing regulations and should have been identified as such. Moreover, the EPA attorney stated that FIFRA does not require EPA to convert a registration from "conditional" to "unconditional" when all additional data requirements have been satisfied. So, according to this official, the fact that EPA has not done so in OPPIN in the past does not raise a legal issue. However, despite these assertions, EPA should still take steps to ensure that its data on the registration status of individual pesticide products are current and accurate. The extent to which EPA ensures that registrants submit the additional data required by conditional registrations, and that it reviews these data, is unknown. In particular, OPP does not have a reliable system, such as an automated data system, designed specifically to track key information related to conditional registrations, including whether registrants submitted additional data within required time frames and whether OPP reviewed these data. OPP officials acknowledged this lack of a comprehensive means to track the status of conditional registrations and noted, as discussed, their intention to develop such a system. However, these officials, as well as OPP's website, note that the conditions for most of these registrations have likely been satisfied as a result of routine program operations that, in the OPP officials' view, constitute a quality assurance check.
While these program operations may help to identify some situations where required data are missing, they fall short of what is needed because they are neither comprehensive nor do they ensure the timely submission of these data. This is a key reason that OPP officials are currently conducting a manual review of the files of the more than 16,000 active pesticide registrations OPP has issued, including conditional registrations, to identify any missing data, misclassified registrations, or other problems. OPP lacks a reliable system designed specifically to track the status of conditional registrations to ensure that additional required data are submitted on time and that OPP reviews these data. As discussed, EPA currently "tracks" conditional registrations in OPPIN, an older data system that was not designed for this purpose and that does not have, among other things, the capability to flag situations in which required data have not been submitted by registrants or reviewed by OPP. Federal internal control standards require, in part, that information should be recorded and communicated to management and others within the entity who need it and in a form and within a time frame that enables them to carry out their internal control and other responsibilities. Thus, for an entity to run and control its operations, it must obtain, maintain, and use relevant, reliable, and timely information for program oversight and decision making. Furthermore, the Office of Management and Budget (OMB) directs agency managers to take timely and effective action to correct internal control weaknesses. As measured against this internal control standard, EPA's lack of a reliable and comprehensive means of routinely collecting and tracking information on conditional registrations, including the status of registrants' submission of required data and OPP's review of these data, constitutes an internal control weakness and leaves OPP without an important management tool. For example, when registrants miss due dates without applying for waivers or extensions, it is difficult for OPP, without a reliable tracking system, to identify these cases for priority follow-up and notify the registrants that their pesticide registrations could be cancelled. In addition, without a reliable tracking system, OPP may miss conditionally registered pesticides for which, had the additional required data been submitted and reviewed, OPP might have altered the terms of the registration. OPP officials acknowledged there have been cases in which their consideration of these additional data led them to make minor changes to a registration, although they could not recall a case where these additional data prompted them to cancel a registration. These officials emphasized, and OPP documents state, that in issuing a conditional registration, even though OPP may ask the registrant for additional data, OPP has determined that the pesticide when used in accordance with labeling and common practices will not cause unreasonable adverse effects on the environment, and that OPP's registration decision takes into account the economic, social, and environmental costs and benefits of the use of that pesticide. Nevertheless, without the ability to systematically track conditional registrations, OPP is not well-positioned to produce summary data to enable it to easily identify situations for priority follow-up; enforce FIFRA and its implementing regulations; and report to Congress and others on program status.
For example, without this tracking, it is more difficult to identify patterns of potential problems for management attention, such as registrants that are repeatedly late in providing additional required data for their conditional registrations, which could be the basis for canceling these registrations. OPP's problems with data management have been well-documented over the years. GAO studies dating back to 1980, 1986, 1991, and 1992 noted problems with OPP data systems used to track the status of pesticide registrations. For example, the 1986 study found that OPP did not have a data system for monitoring whether registrants were submitting the data required by conditional registrations and could only determine the status of data submissions by performing a time-consuming manual file search. At the time, we recommended that OPP take steps to review outstanding conditional registrations of new active ingredients, determine what progress registrants were making toward submitting the required data, and take appropriate action. In response to our recommendation, OPP said it was developing a new automated system to track all outstanding data requirements. However, as discussed, OPP does not currently have such a system. The 1992 study noted that, after having spent $14 million over 3 years on data systems development, OPP could not easily assemble accurate, reliable, and complete information on pesticides subject to reregistration (now registration review). EPA Inspector General studies in 1994 and 2000 noted that OPP had not completed actions to improve information systems that contain inaccurate, incomplete, and duplicate data or that are not integrated. In addition, a 2007 EPA contractor study found that many of these problems persisted, especially with OPPIN. Noting that OPPIN was launched in 2000, this study found that the system had failed to meet the needs of OPP staff and that many of these staff had created "one-off" (off-line) tracking systems in order to get their jobs done, making comprehensive, reliable status updates, such as whether required data had been submitted and reviewed, very difficult to retrieve. The study also reported that OPPIN lacks the needed data fields and reporting functions for detailed tracking of the status of pesticide registrations, and that multiple OPP staff had expressed dissatisfaction with OPPIN, stating that it is not user-friendly, data are not current or complete, and it lacks a "report card" function to easily check the status of pesticide registrations, including reregistration status. Although the contractor study noted that OPPIN was to be retired in September 2008, we found, as discussed, that OPP is still using OPPIN despite its many limitations. OPP officials said, and the internal review states, that the conditions for most conditional registrations have likely been satisfied as a result of routine program operations that, in their view, constitute a quality assurance check. However, these operations fall short of what is needed for quality assurance because they are neither comprehensive nor do they ensure the timely submission of the additional data required as a condition of the registration.
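The flagging capability that OPPIN lacks is straightforward to express. The sketch below scans a list of data requirements for missed due dates and late submissions, the two conditions the report says OPP cannot currently surface for priority follow-up; the records and field layout are invented for illustration.

from datetime import date

# Each record: (product, required study, due date, date submitted or None).
requirements = [
    ("Product X", "terrestrial plant toxicity", date(2004, 12, 1), None),
    ("Product Y", "storage stability", date(2012, 6, 1), date(2012, 8, 15)),
    ("Product Z", "honeybee field study", date(2013, 1, 1), date(2012, 11, 2)),
]

def overdue(as_of: date):
    """Requirements with no submission received by their due date."""
    return [(prod, study, due) for prod, study, due, submitted in requirements
            if submitted is None and due < as_of]

def late_submissions():
    """Requirements whose data arrived after the due date."""
    return [(prod, study, (submitted - due).days)
            for prod, study, due, submitted in requirements
            if submitted is not None and submitted > due]

for prod, study, due in overdue(date(2013, 7, 1)):
    print(f"FOLLOW UP: {prod}, '{study}' was due {due} and has not been received")
for prod, study, days in late_submissions():
    print(f"LATE: {prod}, '{study}' arrived {days} days after its due date")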
The program operations mentioned by OPP include the following: good faith submissions made by registrants to satisfy additional data requirements; record keeping and targeted follow-up done by pesticide product managers; state pesticide registration actions that may bring to light missing data required by OPP's conditional registration of a pesticide; missing data identified as part of OPP's periodic reevaluation of registered pesticides; and registrant-initiated actions, such as label change amendments, that bring to light missing data associated with an earlier conditional registration. The program operations OPP officials identified may help identify some situations where required data are missing, but they each have limitations and fall short of what is needed, as follows: Registrant submissions: While undoubtedly many registrants are conscientious about their timely submission of additional data required by their conditional registrations, OPP has found cases in the past where required data were not submitted or were submitted late. As discussed, once OPP issues a conditional registration, the registrant can move the associated pesticide product into the marketplace. In that sense, the registrant's commercialization of that product is not contingent on the registrant's submission of the additional required data. In addition, because OPPIN does not have the ability to systematically flag missing or late data, some registrants may be emboldened to delay or give less priority to developing and submitting these additional data. Product manager actions: The record keeping and targeted follow-up done by OPP product managers also have limitations. According to OPP officials, each of OPP's 20 product managers is responsible for tracking about 800 of the more than 16,000 active pesticide registrations maintained by OPP. While OPP officials acknowledged that each manager has a very broad span of control, there are other OPP regulatory staff who assist these managers. Moreover, of the approximately 800 registrations handled by each manager, conditional registrations constitute a subset, particularly those for new active ingredients. However, we found that OPP had not provided written guidance to product managers on how to track the status of the pesticide registrations for which they are responsible. As a result, according to OPP officials, product managers use a variety of methods to track this information, including electronic spreadsheets or reminder notices, handwritten notes, and memory. Without OPP guidance on how product managers should maintain their pesticide registration files, in the case of the retirement or resignation of an experienced manager—or even the extended absence of a manager due to illness—other managers asked to replace or fill in for this manager may not be familiar with how he or she maintained files or data, or the extent to which this official relied on memory versus written records. Furthermore, requiring all product managers to track the status of registrations in a consistent, electronic format would help OPP meet the goals of an August 2012 OMB directive that, among other things, directs executive agencies to the fullest extent possible to eliminate paper and use electronic record keeping to ensure transparency, efficiency, and accountability. The directive is applicable to all executive agencies and all records.
State registration: While state pesticide registration activities may help to bring to light missing data associated with conditional registrations issued by OPP, the extent to which this happens is unknown, and these activities should not be relied upon as a quality assurance check. After OPP registers a pesticide, states also can register that pesticide under specific state pesticide registration laws. A state may be more stringent in registering a pesticide for use in that state, but its registration requirements generally may not be less stringent than the federal requirements. In addition, states generally have primary responsibility (known as "primacy") for enforcement of the proper use of pesticides within their borders. Periodic review of registered pesticides: As discussed, FIFRA requires that EPA periodically reevaluate registered pesticides to ensure that each registration continues to satisfy regulatory standards. EPA originally did this reevaluation under its reregistration program, applicable to pesticides registered prior to November 1984. More recently, this reevaluation is being done under the agency's registration review program. According to OPP officials, this periodic review of previously registered pesticides provides an opportunity to identify missing data required by a conditional registration. For example, these officials said any missing data related to section 3(c)(7)(A) (identical or similar) or (B) (new uses) can be identified through registration review. They explained that registrations under these sections do not impose new data requirements. Instead, these registrations are issued when there is an outstanding DCI, or planned DCI, for an identical or similar currently registered pesticide. Thus, according to OPP officials, because registrations under these sections are linked to DCIs, and OPP's Pesticide Registration Information System (PRISM) is used to track DCIs, they are confident in their ability to ensure the timely submission of required data. They noted that PRISM is a newer and more robust data system than OPPIN. They also noted that (1) most conditional registrations are made under sections 3(c)(7)(A) or (B), (2) the associated DCIs place a legal obligation on the registrant to provide the requested data, and (3) OPP may suspend a registration for failure to respond to a DCI. However, we note that PRISM has apparent limitations as well. For example, OPP officials said that PRISM is not designed, per se, to track conditional registrations and therefore cannot be used to identify the conditional registrations, if any, associated with a particular DCI. In addition, they said that while this system is useful for tracking the status of DCIs on a case-by-case basis, PRISM lacks the capability to produce summary reports for management attention that could indicate, for example, the extent to which registrants are meeting the requirement to provide an initial response to a DCI within 90 days. Furthermore, as acknowledged by OPP officials, registration review is not helpful in tracking the status of conditional registrations made for new active ingredients under FIFRA section 3(c)(7)(C). For these registrations, there is no relationship to an identical or similar, currently registered pesticide. As a result, the potential exists that a conditionally registered pesticide under this section could continue to be sold and used in the United States for a number of years before OPP discovered that the additional data required had not been submitted and were overdue.
In reviewing OPP documents related to the conditional registration of new active ingredients done in the early 2000s, we noted such cases. In each case, when the associated pesticide product came up for registration review, OPP determined that some of the required data related to the original conditional registration had not been submitted and were late or, if submitted, had not been reviewed by OPP. For example, in the case of a pesticide product containing the active ingredient Foramsulfuron, conditionally registered in November 2002, two required studies on the effects of this pesticide on terrestrial and aquatic plants that were due in December 2004 had not been submitted 10 years after the conditional registration was issued, as determined by OPP's registration review of this pesticide in 2012. In another case, involving a pesticide product containing the active ingredient Acetamiprid, conditionally registered in March 2002, OPP discovered during its registration review of this pesticide in 2012, about 10 years later, that it had received, but not reviewed, a study related to the effects of this pesticide on honeybees. OPP documents indicate the registrant submitted this study in 2001, even before OPP granted the conditional registration. Acetamiprid belongs to a class of pesticides called neonicotinoids that some beekeepers, environmental groups, and others suspect of having adverse effects on honeybees. Registrant-initiated actions: According to OPP officials, if a registrant applies to amend a registration, such as to make a label change, this action triggers an OPP review of the data supporting the pesticide registration and provides an opportunity to identify missing data associated with a conditional registration. If it is determined that data are missing, the agency may issue a DCI requiring the registrant to provide the data by a specific date. However, this mechanism to identify missing data is ad hoc and only applies to cases in which a registrant seeks to amend its registration. In addition, OPP officials pointed to an analysis they performed as part of OPP's 2011 internal review that showed registrants usually meet the additional data requirements associated with their conditional registrations on a timely basis. Specifically, OPP examined 544 conditional registrations that it granted (1) for new uses or new active ingredients under FIFRA sections 3(c)(7)(B) or (C), respectively, and (2) from March 2004 to September 2010. According to OPP officials and the office's website, these conditional registrations were selected because, as relatively recent registrations, they were the most likely candidates to be missing additional required data. In contrast, according to OPP officials, older conditional registrations were less likely to be missing data because of the application of the cited routine program operations. To do this analysis, OPP officials said they first identified all additional data requirements and associated due dates for these conditional registrations, and then manually reviewed the files for these pesticides to determine if the required data were submitted and due dates met. When finished, OPP concluded that registrants had completed 96 percent of "all actions intended" for these 544 conditional registrations in a timely manner. OPP posted this information on its website.
However, in reviewing the information on the website, as well as supporting documents provided by OPP, we were unable to verify these calculations, in part because OPP was unable to locate some of the supporting documentation. In addition, some of the documentation provided did not always make clear whether the submitted data had been submitted on time or reviewed by OPP. Furthermore, we found that some of the statements on the website were confusing; used technical terms such as "registration," "action," and "decision" without defining the terms; and contained other discrepancies. Some of these problems have been cited by legal and environmental groups who found this information confusing as well. Although OPP officials generally agreed that revisions were needed to the discussion on the website for clarity, OPP had no specific plan or time frame for doing so. Regardless of any clarifications needed to the website, an OPP Deputy Director expressed confidence in the results of this analysis. As of July 2013, the discussion of this analysis on the website had not been clarified. In May 2012, OPP staff began manually reviewing the files for the more than 16,000 pesticide registrations granted by EPA. The purposes of this review include identifying all outstanding data requirements, as well as cases where the registration action, such as a label change amendment, was mischaracterized as a "conditional registration." According to OPP officials, all prior registrations are being reviewed, not just those classified as conditional, because pesticide registrations may have long histories, and even though a registration may have been classified initially as unconditional, OPP may have imposed additional data requirements at a later time. According to OPP officials, reviewing each pesticide registration file is time-consuming and, depending on the pesticide, may take from a few hours to a few days to complete. This generally includes OPP regulatory staff reviewing voluminous paper files associated with many of these pesticide registrations. According to these officials, once this review process is completed, OPP will have a "clean" set of data that will, among other things, identify (1) any missing data, (2) missed deadlines for registrants submitting these data, and (3) cases where the registration action was mischaracterized, including the misuse of "conditional registration." These officials said the review results are being recorded in an electronic spreadsheet, known as the "master file," for future use. For example, when a pesticide comes up for registration review, OPP officials said that staff will refer to the master file to identify any missing data that should be included in the DCI resulting from that review. Although OPP's review of prior pesticide registrations remains a work in progress, OPP provided an excerpt from its electronic spreadsheet showing the results for three pesticides that we asked about because other agency documents we reviewed suggested possible registration issues. For these pesticides, the spreadsheet indicated that most, but not all, of the required data had been submitted by registrants; some of these data were submitted from 2 to 12 months after the related due dates and, for one of these pesticides, no due date was specified in the registration notice, making it impossible to determine whether the data were submitted on time. In this last case, OPP officials stated that the lack of due dates in the registration notice was an unintentional oversight.
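The timeliness rate OPP reported from its 544-registration analysis, and the indeterminate case created by a missing due date, can both be sketched as follows. The records and the definition of "timely" are our assumptions; as discussed, we could not verify OPP's actual calculations.

from datetime import date

# Each record: (due date or None, date submitted or None) for a required action.
actions = [
    (date(2006, 3, 1), date(2006, 2, 10)),  # submitted early: timely
    (date(2006, 3, 1), date(2006, 9, 20)),  # submitted about 6 months late
    (date(2008, 1, 1), None),               # never submitted
    (None, date(2007, 5, 5)),               # no due date in the registration
                                            # notice: timeliness indeterminable
]

timely = sum(1 for due, sub in actions
             if due is not None and sub is not None and sub <= due)
determinable = sum(1 for due, _ in actions if due is not None)
indeterminable = len(actions) - determinable

print(f"{timely} of {determinable} actions with due dates were timely "
      f"({100 * timely / determinable:.0f} percent)")
print(f"{indeterminable} action(s) could not be assessed for timeliness")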
Since OPP has not finished its review of all prior registrations, it has not yet developed summary statistics on the frequency with which these types of problems were found. OPP officials said they plan to complete their manual review of prior registrations by the fall of 2013, but its completion by then will depend on the amount of time OPP staff can devote to this review relative to their other responsibilities. In addition, according to agency officials, by the end of calendar year 2013, they plan to make public information from the review about the number of active ingredients approved under FIFRA section 3(c)(7)(C) for which data are overdue, and those for which data were submitted late. While OPP's manual review of existing registrations may result in a clean data set and identify some missing data and other problems not discovered as a result of what OPP calls routine program operations, it is an interim measure. Among other things, OPP officials said their office needs a comprehensive automated data system for tracking conditional registrations. As noted by one OPP division director, "no one wants to have to track this information by hand" in the future. In addition, according to these officials, OPP does not plan to update the master file to include new pesticide registrations or other registration changes that occur in the future. They noted that the master file is retrospective, and it provides a snapshot in time. Instead, OPP officials said that new pesticide registrations and other registration changes will be entered into OPPIN using the new codes that OPP planned to introduce in June 2013. In July 2013, EPA advised us that the codes had been entered in OPPIN. According to these officials, the introduction of these new codes, in conjunction with staff training on how to use them, should preclude some of the misclassification problems experienced in the past until a new comprehensive data system is available to replace OPPIN. The 24 stakeholders that responded to our questionnaire—including representatives of consumer (3), environmental (6), industry (5), legal (3), producer (1), science (1), and state government (5) groups—generally indicated that EPA needs to improve its conditional registration process and, in some cases, they offered suggestions for improving this process. The issues that stakeholders raised included concerns about the timely submission and review of required data and the misuse of the conditional registration designation. In addition, stakeholders' views varied regarding the potential benefits and problems associated with the conditional registration of pesticides. In responding to our questionnaire, respondents in the consumer, environmental, industry, legal, science, and state government stakeholder groups generally reported concerns with submission or review of required data as follows: Of the 19 respondents in the consumer, environmental, legal, producer, science, and state government groups, 17 reported concerns related to registrants not submitting additional required data on time, including concerns about pesticides that remain in the marketplace when their environmental and health impacts have not been fully evaluated. In addition, 3 of the 8 respondents from consumer groups and state government were generally concerned that, when registrants are allowed to miss due dates without any follow-up from EPA, there is little incentive for the registrants to submit the additional data and take the data requirements seriously.
Further, 4 of the 17 respondents from consumer, environmental, legal, and state government groups reported that EPA should cancel registrations for those registrants who do not submit required data on time. Of the 19 respondents from environmental, industry, legal, and state government groups, 8 reported concerns about EPA’s record keeping related to conditional registrations, including the agency’s ability to ensure the receipt and review of required data. Specifically, one industry stakeholder stated that EPA does not effectively track the receipt and review of required data and said, in particular, EPA does not always acknowledge receipt of required information and does not always notify recipients whether the data submitted satisfied the condition. However, 4 of the 5 respondents from industry stated the public faced little risk when required data are not submitted or reviewed, pointing out that before EPA conditionally registers a pesticide, the agency must determine that the pesticide meets FIFRA registration standards. Stakeholders representing environmental, industry, and state government groups generally reported concern with EPA issuing conditional registrations for circumstances that were outside of the permissible situations stated in FIFRA. Examples are as follows: Of the 16 respondents representing these groups, 10 reported concern that EPA was overusing conditional registrations. Three of the 5 environmental stakeholders that had these concerns generally stated that conditional registrations were originally intended to be used in limited circumstances where a public need was established— such as the need to quickly approve the use of a pesticide to prevent significant crop damage and economic loss—and that EPA’s current practices for issuing conditional registrations are not in keeping with this original intent. Of the 5 respondents from industry, 3 stated that there are cases when EPA grants a conditional registration for a pesticide that should have qualified for an unconditional registration. These stakeholders reported that, in some of these cases, EPA had granted a conditional registration for reasons that were outside of those permissible circumstances outlined in FIFRA, such as the need for labeling changes and potential data requirements that could be imposed in the future as a result of the registration review process. Of these three industry stakeholders, two were aware of EPA’s recent efforts to ensure that conditional registrations are granted only in appropriate circumstances, as described on the agency’s website, and these stakeholders were supportive of these efforts. In keeping with these concerns, respondents from environmental, industry, producer, and state government groups generally offered suggestions for improving EPA’s conditional registration process. Following are examples: Of the 16 respondents representing environmental, industry, and state government groups, 7 generally stated that EPA needs a better system for tracking the status of conditional registrations, including its review of required data. Of the 10 respondents representing industry and state government groups, 6 stated that it would be helpful if EPA developed a way to share information about conditional registrations with external stakeholders. For example, a respondent from a state government group suggested that EPA create a notification and tracking system for states that specifically lists the status of conditional registrations and any pending data requirements. 
The stakeholder stated that this system would facilitate the exchange of information between EPA and the states. Also, a respondent representing an industry group stated that EPA should develop a system similar to one used by California's Department of Pesticide Regulation; the respondent stated that this system allows registrants to access information on the status of their pesticide registration applications pending before that state and about their conditionally registered products. Of the 5 industry respondents, 2 suggested that EPA address concerns about overuse of conditional registrations by taking steps to ensure that these registrations are used only according to the explicit criteria set forth in FIFRA section 3(c)(7). Of the 17 respondents representing consumer, industry, legal, producer, and state government groups, 14 stated that there were significant benefits in conditionally registering pesticides. They generally noted that the conditional registration process is an important and effective mechanism that gives EPA the flexibility to allow pesticides to move to the market more quickly, and that quicker movement to the market, in turn, provides users and growers with faster access to the pesticides they need. For example, one respondent commented that conditional registrations are especially valuable in situations where users have a much more limited choice of pesticides that meet their needs, such as growers of specialty crops. Seven of these 14 respondents also generally stated that conditional registrations promote innovation by bringing new technologies and products to the marketplace faster. In contrast, 13 of the 18 respondents representing consumer, environmental, legal, science, and state government groups stated that there were numerous negative impacts caused by conditionally registering pesticides. For example, all 13 of the respondents concerned with negative impacts reported that conditionally registering pesticides can delay EPA's ability to mitigate public health and environmental impacts caused by pesticides. These respondents generally stated that EPA is not conducting a full, rigorous review of conditionally registered pesticides and therefore is allowing these pesticides into the marketplace without complete data, such as toxicity tests and studies that demonstrate the pesticides' impact on the environment. Eight of these 13 respondents, who expressed concern with conditionally registering a new use of a pesticide, generally noted that, without a full, rigorous review, EPA may miss problems caused by the new use that may not have occurred with the original use. Four of the 13 respondents also stated that, especially in cases that involve pesticides with new active ingredients, a conditional registration should be granted only if a critical need for the pesticide can be demonstrated. Six of the 13 respondents stated that the risks posed by conditionally registering new active ingredients were so great that EPA should discontinue this type of conditional registration. In elaborating on their concerns, 12 of the 18 respondents from consumer, environmental, legal, science, and state government groups cited examples of conditionally registered pesticides that, in their opinion, should not have been conditionally registered.
The three pesticide products mentioned most frequently by these respondents were the following: Background: According to EPA documents, in 2003, EPA conditionally registered the insecticide Clothianidin, which the agency had identified as an alternative to older, more toxic insecticides. As one of the conditions of the registration, EPA required the manufacturer, Bayer CropScience, to submit a study evaluating the effects on honeybees of prolonged exposure to Clothianidin. In 2007, the agency reviewed this study and determined that it satisfied EPA's field study guidelines. However, in 2010 and again in 2012, numerous entities, including consumer and environmental groups, petitioned EPA to discontinue use of Clothianidin, charging, among other things, that it posed an imminent hazard to honeybees. In 2010, EPA decided to reevaluate the study and ultimately determined that there were some deficiencies in the study but that the registered uses of Clothianidin met the FIFRA standard for registration. According to Bayer CropScience, the use of Clothianidin was necessary to prevent crop damage from pesticide-resistant pests, and the honeybee deaths in 2008, which environmental groups claimed were caused by Clothianidin, were actually the result of a variety of factors, including incorrect application of the pesticide. Stakeholder comments: Half (9 of 18) of the respondents representing consumer, environmental, legal, science, and state government groups raised concerns about the insecticide Clothianidin. For example, of the 9 respondents who raised concerns, 6 generally reported that EPA should not have conditionally registered this pesticide in 2003 without all the data needed to establish that the pesticide would not significantly increase unreasonable adverse effects on pollinators, including honeybees. Two of these 6 respondents reported that the failure to do so has allowed the widespread use of a pesticide that, in their view, has caused the death of honeybee colonies and irreparably damaged the environment and livelihoods of beekeepers. Four of the 6 respondents said that it is irresponsible for EPA to refuse to discontinue the registration of this product when the agency eventually determined that the pollinator study submitted by the company was inadequate. Background: According to EPA documents, in August 2010, EPA conditionally registered the active ingredient of the pesticide Imprelis. This pesticide, manufactured by DuPont, was a low-toxicity herbicide used to control weeds, vines, and grasses on nonfood use sites, such as weeds around an office building. EPA stated that the studies originally submitted for Imprelis were adequate to make a finding for the registration but also concluded that two additional studies (on toxicity and reproduction) were required to confirm the conclusions from existing data. However, in the summer of 2011, EPA received reports from several states that this pesticide may have caused injury to certain species of evergreen trees, particularly Norway spruce and white pine. In a June 2011 letter, DuPont cautioned professional applicators not to use Imprelis near certain species of trees, including Norway spruce and white pine. On August 4, 2011, DuPont voluntarily suspended sales of Imprelis and, on August 11, 2011, EPA issued a stop-sale order directing DuPont to immediately halt the sale, use, or distribution of Imprelis.
According to EPA, it issued this stop-sale order because it had reason to believe the product was misbranded and the agency had obtained new information, not available during the registration process, that showed Imprelis was toxic to certain trees. EPA is currently evaluating the tree damage to determine what caused the injuries. DuPont has since started a return and refund program for Imprelis users. Stakeholder comments: Of the 7 respondents representing environmental and science groups, 4 raised concerns about the pesticide Imprelis. According to 3 of these 4 stakeholders, EPA did not properly consider the "unreasonable effects on the environment" of this pesticide, specifically effects on organisms that this pesticide was not intended to kill, including trees. One stakeholder reported that the experience with this pesticide illustrates what can happen when EPA allows a pesticide product to proceed to the marketplace without complete information and, in this stakeholder's opinion, confirms that the conditional registration process actually allows EPA to bypass statutory safeguards and rush pesticides with unknown and unevaluated risks to market. Background: According to EPA documents, EPA conditionally registered a nanosilver-based pesticide product after determining that a conditional registration was appropriate because (1) the company had insufficient time to generate certain required data, (2) use of the pesticide is in the public interest, and (3) use of the pesticide during the period needed to generate and review the required data will not cause unreasonable adverse effects. The registration of this product is being challenged in a lawsuit by the Natural Resources Defense Council. (See GAO, Nanotechnology: Nanomaterials Are Widely Used in Commerce, but EPA Faces Challenges in Regulating Risk, GAO-10-549 (Washington, D.C.: May 25, 2010), and EPA Office of the Inspector General, EPA Needs to Manage Nanomaterial Risks More Effectively, Report No. 12-P-0162 (Dec. 29, 2011).) OPP officials noted that the Office of the Inspector General advised EPA to develop a better internal process for sharing data across program offices. According to these officials, EPA has implemented this recommendation, and an Inspector General official concurred. OPP faces a demanding task in reviewing the applications submitted each year for new or amended registrations. The pesticide products that the agency registers play a critical role in food production by helping to minimize crop losses due to pests and weeds. In addition, pesticide products have helped improve public health by controlling disease-carrying pests, such as insects and rodents. At the same time, consumers rely on OPP to ensure that registered pesticide products do not cause unreasonable adverse effects on the environment or human health when used according to the label instructions approved by the agency. OPP faces challenges in tracking key information specifically related to conditional registrations and, as a result, is unable to produce accurate information on the current number of these registrations. OPP's lack of a reliable and comprehensive means of routinely collecting and tracking information on conditional registrations, including the status of registrants' submission of required data and OPP's review of these data, leaves it without an important management tool. Because OPP is not systematically tracking whether registrants of conditionally registered pesticides submitted additional required data, and whether OPP reviewed these data, it may not be able to identify situations in which the additional data would suggest the need to alter a registration.
Furthermore, without the ability to systematically track conditional registrations, OPP is not well positioned to produce summary data to enable it to easily identify situations for priority follow-up; enforce FIFRA and its implementing regulations; and report to Congress and others on program status. In addition, OPP's use of conditional registrations for actions other than those that meet the criteria outlined in FIFRA Section 3(c)(7) has created confusion for its staff and may leave OPP vulnerable to charges by environmental, industry, and other stakeholders who assert that it inappropriately grants conditional registrations. According to OPP officials and documents, weaknesses in guidance and training, management oversight, and data management contributed to the misclassification of other pesticide-related activities as conditional registrations. OPP has raised the visibility of these issues by holding meetings at least quarterly with representatives of the OPP divisions that issue pesticide registrations to discuss any conditional registrations being considered for new active ingredients, to ensure that they meet the statutory criteria outlined in FIFRA Section 3(c)(7) and that the additional information requested is indeed needed to unconditionally register the pesticide. In addition, OPP has taken or plans to take other actions to ensure that staff appropriately grant conditional registrations and registrants submit required data; however, these are short-term solutions that do not fully address the problems identified. As OPP officials and stakeholders recognize, OPP needs a comprehensive automated data system to track conditional registrations from the time a conditional registration is granted until the additional data requested are received and reviewed. OPP stated over 25 years ago that it planned to develop an automated system for tracking conditional registrations of new active ingredients, but it did not follow through with this plan. OPP has secured funding through FIFRA amendments to begin the development of such a system, but much work remains to be done and will depend on a further commitment of needed resources. Moreover, OPP lacks written guidance or a consistent methodology for how product managers are to maintain their pesticide registration files. Allowing product managers to use disparate methods to collect and keep information on the pesticides they are responsible for makes it more difficult to develop summary information about the status of pesticide registrations overall, which could be useful for managing the pesticide registration program. Also, as the extensive amount of time and effort needed in OPP's ongoing review of all registered pesticides demonstrates, product managers' current methods are not sufficient to efficiently track this information. It will take time and money to develop an automated system for tracking the progress of conditionally registered pesticides; in the interim, OPP is further hampered in its efforts to stay informed about the status of conditional registrations because product managers do not use a consistent system for tracking their status, including when data are submitted by registrants and reviewed by OPP. Furthermore, OPP's reliance on the institutional knowledge of its product managers and the records they keep is problematic for other reasons, including the loss of knowledge and experience of these employees as they retire or are replaced by new employees.
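This report does not prescribe a design for such a system, but a minimal sketch can illustrate the kind of record keeping involved; every name and field below is hypothetical and illustrative, not EPA's actual schema:

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class DataRequirement:
    """One study or data item required as a condition of registration (hypothetical)."""
    study_name: str                   # e.g., "pollinator field study" (illustrative)
    due_date: date                    # deadline set when the registration was granted
    submitted: Optional[date] = None  # date the registrant submitted the data, if any
    reviewed: Optional[date] = None   # date OPP completed its review, if any

@dataclass
class ConditionalRegistration:
    product: str
    registrant: str
    granted: date
    requirements: list = field(default_factory=list)

    def overdue(self, today: date) -> list:
        """Requirements past due with nothing on file -- candidates for follow-up."""
        return [r for r in self.requirements
                if r.submitted is None and r.due_date < today]

    def awaiting_review(self) -> list:
        """Submitted data that OPP has not yet reviewed."""
        return [r for r in self.requirements
                if r.submitted is not None and r.reviewed is None]

    def eligible_for_unconditional(self) -> bool:
        """True once every required item has been submitted and reviewed."""
        return all(r.reviewed is not None for r in self.requirements)
```

With records of this kind, the summary reporting the report finds missing (overdue submissions, studies awaiting review, products eligible for unconditional status) reduces to simple queries over a portfolio of registrations rather than reconstruction from individual product managers' files.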
Finally, OPP has also not clearly and concisely communicated the results of its analyses of conditional registrations on its website. It is important for agencies to ensure that information placed on their websites is accurate and free of discrepancies. OPP officials have acknowledged the need to correct the agency's website on conditional registrations and remove any confusing or inaccurate statements. To improve EPA's management of the conditional registration process, we recommend that the Administrator of EPA direct the Director of the Office of Pesticide Programs to take the following three actions: (1) complete plans to automate data related to conditional registrations to more readily track the status of these registrations and related registrant and agency actions and to identify potential problems requiring management attention; (2) pending development of an automated data system for tracking the status of conditional registrations, develop guidance to ensure that product managers use a uniform methodology to track and document this information, including when data are submitted by registrants and reviewed by EPA, in the files maintained by each pesticide product manager; and (3) review and correct, as appropriate, OPP's website on conditional registrations to ensure that the information presented is clear, concise, and accurate, including defining technical terms. We provided a draft of this report to EPA for review and comment. In written comments, which are included in appendix II, EPA agreed with the report's recommendations. Regarding the first recommendation, EPA said that its implementation plan to automate data related to conditional registrations includes (1) development of new codes for identifying conditional registration decisions in OPPIN; (2) training its staff on use of the new categories represented by these new codes, and making the training available online for guidance; and (3) changes to its databases to allow staff to check more easily whether there are any outstanding requests for data on any pesticide active ingredients. EPA also said it plans to develop a more comprehensive system for tracking conditional registrations; however, the agency's ability to do so depends on the availability of funding and the complexity of incorporating changes in the databases. Regarding the second recommendation, EPA said it is developing a standard operating procedure for staff to follow when entering data into the computerized tracking system about the statutory basis for registration decisions. According to EPA, this procedure, together with the new training for staff, should ensure that conditional registration decisions are properly identified in the OPPIN database going forward. The agency said it expects to complete the procedure by the end of calendar year 2013. EPA added that the status of products previously approved under conditional registration authority is also being reviewed and updated, as necessary. For the third recommendation, EPA said that by the end of 2013 it will revise its website on conditional registration to both clarify and update the information presented. In addition, EPA indicated the website will include an outline of ongoing agency work to strengthen its conditional pesticide registration program. EPA also provided technical comments, which we incorporated as appropriate. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date.
At that time, we will send copies of this report to the Administrator of EPA, the appropriate congressional committees, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or gomezj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. Our objectives were to examine the (1) number of conditional pesticide registrations the Environmental Protection Agency (EPA) has granted and the basis for granting these registrations; (2) extent to which EPA ensures that registrants submit the additional data EPA required as part of conditional registrations and reviews these data; and (3) views of relevant stakeholders on EPA’s use of conditional registrations, including ways, if any, to improve the conditional registration process. To address these objectives, we reviewed relevant federal statutes and regulations, EPA program and guidance documents, federal internal control standards, and previous GAO and EPA Inspector General reports. We also reviewed EPA’s fiscal year 2011–2015 strategic plan; EPA’s fiscal year 2011 and 2012 annual performance plans; EPA’s budget justification documents for fiscal years 2012 and 2013; Office of Pesticide Programs’ (OPP) pesticide registration work plans for fiscal years 2001 through 2012; OPP notices of pesticide registration for conditionally registered pesticides; and Federal Register notices related to OPP’s registration decisions. In addition, we interviewed OPP officials and reviewed documentation they provided to obtain further information and clarification on EPA’s conditional registration process, including any planned responses to internal review or external stakeholder concerns. Furthermore, we reviewed recent literature related to pesticide registration, including information and documents found on the websites of a variety of consumer, environmental, industry, legal, producer, science, and state government organizations. To examine the number of conditional pesticide registrations EPA has granted and the basis for them, we requested that OPP provide us with summary data on (1) the number of pesticide registrations currently in conditional status and how long they have been in this status; (2) the total number of current pesticide registrations (conditional and unconditional); and (3) for fiscal years 1997 through 2011, the number of conditional registrations granted for each year and the basis on which these were granted. 
We asked for information for 1997 through 2011 because reviewing registrations from this period could address the concerns that environmental and other groups raised that some pesticides may have been in conditional status for many years and also take into account key changes made to the pesticide registration and tolerance-setting processes that occurred after 1996. We intended to assess the reliability of the data that we requested from EPA and conduct electronic testing on data fields necessary for our analysis; however, after interviewing OPP officials and reviewing past GAO, Inspector General, and EPA contractor studies examining OPP's data management, especially its use of the Office of Pesticide Programs Information Network (OPPIN), we concluded that EPA could not provide us with sufficiently reliable data for obtaining summary-level information on conditional pesticide registrations. In the absence of these data, we discussed with OPP officials the capabilities and limitations of OPPIN and any potential work-arounds they employ or are planning. In addition, we reviewed and discussed with these officials a recent analysis of OPPIN data that OPP conducted to determine how many of the conditional registrations granted were outside of the permissible circumstances outlined in the Federal Insecticide, Fungicide, and Rodenticide Act (FIFRA). The results of this analysis were included in an OPP internal review report and posted on OPP's website. To address the second objective—the extent to which EPA ensures that registrants submit the additional data required as part of conditional registrations and reviews these data—we interviewed OPP officials about how they manage conditional registration data requirements. We also asked OPP to provide information on the number of pesticides that (1) have a conditional registration, but the registrant has not submitted the additional required data by the specified due date; (2) have a conditional registration and the registrant has submitted the additional required data, but EPA has not reviewed these data; (3) still have a conditional registration, even though the registrant has submitted, and EPA has reviewed, the additional required data; and (4) had been changed from conditional to unconditional status. However, OPP officials said they could not provide these data because they do not have an automated data system that tracks this information, but they did analyze a subset of 544 conditional registrations to try to determine whether registrants had submitted the required data and posted the results of this analysis on OPP's website. To obtain the views of relevant stakeholders on EPA's conditional registration process and ways, if any, to improve it, we administered a questionnaire to 35 professionals in the consumer, environmental, industry, legal, producer, science, and state government fields. We used a multistage process to identify our final nonprobability sample of 35 potential respondents. This process included (1) conducting a literature search to identify groups or individuals who had recently published articles on registration (including conditional registration) of pesticides, (2) asking agency and other relevant officials for recommendations of knowledgeable parties in each of these areas, and (3) asking prospective stakeholders for suggestions of other potential stakeholders.
Through these methods, we arrived at an initial list of 148 potential stakeholders that were divided, based on their institutional affiliation, into the seven categories listed above. We then narrowed this list by (1) performing online searches of the professional affiliations of each stakeholder to determine whether they would likely have sufficient knowledge about EPA's conditional registration of pesticides and (2) conducting screening interviews, by phone and e-mail, to determine whether the individuals were sufficiently familiar with EPA's process for registering pesticides and to secure their commitment to participate in our survey. After this process was completed, we arrived at a list of 35 stakeholders to whom we sent, via e-mail, our questionnaire. The questionnaire asked about, among other things, (1) problems, if any, associated with each of the permissible situations for which registrations can be conditionally granted; (2) risks, if any, associated with registrants not submitting data required by a conditional registration in a timely manner; (3) pesticides with conditional registrations that stakeholders believe should not have been conditionally registered; and (4) suggestions for improving EPA's conditional registration process. In preparing to administer this questionnaire, we conducted three pretests to ensure that the questions were clear, terminology was used appropriately, and the questionnaire was unbiased. We used the results of our pretests to revise the questions as needed. The questions were open-ended, and thus issues raised by stakeholders had to be "volunteered." We did not ask each stakeholder to agree or disagree with particular issues. The administration period for this questionnaire was from July through September 2012. Of the 35 participating stakeholders, 24 provided complete, valid questionnaire responses. The organizations that participated as stakeholders were Akin Gump, LLP; American Chemistry Council; American Farm Bureau Federation; Beyond Pesticides; California Department of Pesticide Regulation; Center for Biological Diversity; Center for Environmental Health; Center for Food Safety; Center for Science in the Public Interest; Council of Producers and Distributors of Agrotechnology; CropLife America; Dow Agrosciences, LLC; Earthjustice; Environmental Working Group; Food and Water Watch; Florida Department of Agriculture and Consumer Services; Iowa Department of Agriculture; McDermott, Will and Emery, LLP; Natural Resources Defense Council; New York State Department of Environmental Conservation; Pesticide Action Network North America; Syngenta Crop Protection, LLC; Texas Department of Agriculture; and The Endocrine Disruption Exchange. The results of this questionnaire cannot be generalized to all parties knowledgeable about the conditional registration of pesticides; rather, our analysis of the results identifies common themes present in the responses of those who participated. We analyzed stakeholder responses to the questionnaire to identify themes and develop summary findings. Two GAO analysts separately conducted this analysis, placing stakeholders' responses into one or more categories, and then compared their analyses. All initial disagreements regarding the categorizations of stakeholders' responses were discussed and reconciled. The analysts then tallied the number of responses in each category.
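As a rough sketch of the reconcile-and-tally steps just described (the category labels and response identifiers below are invented for illustration, not GAO's actual coding data):

```python
from collections import Counter

# Each analyst independently assigns one or more categories to each response.
analyst_a = {"R1": {"tracking"}, "R2": {"tracking", "innovation"}, "R3": {"criteria"}}
analyst_b = {"R1": {"tracking"}, "R2": {"innovation"}, "R3": {"criteria"}}

# Flag responses whose category sets differ; these are discussed and reconciled.
disagreements = [r for r in analyst_a if analyst_a[r] != analyst_b[r]]
print("to reconcile:", disagreements)  # ['R2']

# After reconciliation, tally how many responses fall in each category.
reconciled = {"R1": {"tracking"}, "R2": {"tracking", "innovation"}, "R3": {"criteria"}}
tally = Counter(cat for cats in reconciled.values() for cat in cats)
print(tally)  # e.g., Counter({'tracking': 2, 'innovation': 1, 'criteria': 1})
```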
We conducted this performance audit from September 2011 to August 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the individual named above, James R. Jones, Jr., Assistant Director; Jameal Addison; Kevin Bray; Kirsten B. Lauber; Robin Marion; Lisa Shames; Carol Herrnstadt Shulman; Kathryn Smith; and Lisa Turner made key contributions to this report. Colleen M. Candrl, Joyce Evans, Dan Royer, and Kiki Theodoropoulos also made important contributions to this report.
As of September 2010, more than 16,000 pesticides were registered for use in the United States, according to EPA. EPA reviews health and environmental effects data submitted by a company and may register a pesticide or, alternatively, grant a "conditional registration" for a pesticide under certain circumstances, even though some of the required data may not have been submitted or reviewed. The company must provide the missing data within a specified time. In 2010, environmental and other groups charged that EPA had overused conditional registrations and did not appear to have a reliable system to identify whether the required data had been submitted. GAO was asked to examine issues related to EPA's use of conditional registrations for pesticides. This report examines the (1) number of conditional registrations EPA has granted and the basis for these, (2) extent to which EPA ensures that companies submit the required additional data and EPA reviews the data, and (3) views of relevant stakeholders on EPA's use of conditional registrations. GAO reviewed EPA data and surveyed stakeholders, among other things. The total number of conditional registrations granted is unclear, as the Environmental Protection Agency (EPA) reports that its data are inaccurate for several reasons. First, the database used to track conditional registrations does not allow officials to change a pesticide's registration status from conditional to unconditional once the registrant has satisfied all requirements, thereby overstating the number of conditional registrations. Second, EPA staff have misused the term "conditional registration," incorrectly classifying pesticide registrations as conditional when, for example, they require a label change, which is not a basis in statute for a conditional registration. According to EPA documents and officials, weaknesses in guidance and training, management oversight, and data management contributed to these misclassification problems. For example, according to EPA documents, there was limited, organized management oversight to ensure that regulatory actions were not misclassified as conditional registrations. As of July 2013, EPA officials told GAO that the agency has taken or is planning to take several actions to more accurately account for conditional registrations, including beginning to design a new automated data system to more accurately track conditional registrations. The extent to which EPA ensures that companies submit additional required data and EPA reviews these data is unknown. Specifically, EPA does not have a reliable system, such as an automated data system, to track key information related to conditional registrations, including whether companies have submitted additional data within required time frames. As a result, pesticides with conditional registrations could be marketed for years without EPA's receipt and review of these data. In the absence of a reliable system for managing conditional registrations, EPA relies on a variety of routine program operations, such as its review of a company's changes to a pesticide registration, to discover that data are missing. However, these methods fall short of what is needed because they are neither comprehensive nor do they ensure timely submission of these data. According to federal internal control standards, EPA's lack of a reliable system for managing conditional registrations constitutes an internal control weakness because the agency lacks an effective mechanism for program oversight and decision making. 
Stakeholders GAO surveyed--representatives of consumer, environmental, industry, legal, producer, science, and state government groups--generally said EPA needs to improve its conditional registration process. For example, some stated EPA should improve its data systems for tracking conditional registrations to ensure that required data are submitted and reviewed in a timely manner. However, stakeholder views varied on the benefits and disadvantages of conditionally registering pesticides. For example, some consumer, industry, legal, producer, and state government stakeholders stated that the conditional registration process promotes innovation by bringing new technologies to the marketplace more quickly. In contrast, some consumer, environmental, legal, science, and state government stakeholders voiced concerns that conditional registration allows products whose safety has not been fully evaluated to enter the marketplace. GAO recommends, in part, that EPA consider and implement options for an automated system to better track conditional registrations. EPA agreed with GAO's recommendations and noted specific actions it will take to implement them.
The fiscal year 2005 budget is the fifth consecutive budget request in which IRS is proposing increased staffing for enforcement and the third in which it has identified internally generated savings to help fund the increase. The 2005 budget proposes that, of the $377.3 million for new initiatives to be paid for through new funding or reinvested savings, $315.2 million, or 84 percent, go to enforcement. In the past, IRS has not been able to realize all the projected savings intended to help fund enforcement staffing increases. In addition, other priorities, including unbudgeted expenses and taxpayer service, have consumed budget increases and internally generated savings. This raises questions about IRS's ability to increase enforcement staffing as planned in 2005. IRS's fiscal year 2005 budget request is $10.7 billion, up $489.8 million or 4.8 percent from the amount appropriated for fiscal year 2004. IRS's request identifies a total of $750.3 million of new proposed spending—$377.3 million for new initiatives, primarily enforcement, and $373 million to maintain current operations (such as salary increases included in the budget). IRS plans to fund the additional spending from three sources—budget increases, program reductions, and internal savings. IRS is proposing to receive $489.8 million in budget increases, gain $149.7 million from program reductions, primarily from reducing the amount for BSM, and save $110.8 million from process improvements. For context about IRS's staff resources, we provide information in appendix I about how IRS allocated those resources in fiscal year 2003 to various functions, including returns processing, taxpayer service, and enforcement. In its 2005 budget request, IRS makes increasing enforcement staffing its priority. IRS identified its priority enforcement areas as: promoters of tax schemes, misuses of offshore transactions, uses of corporate tax avoidance transactions, underreporting of income by higher income taxpayers, and failures to file and pay large amounts of employment taxes. IRS is proposing to spend $377.3 million on new initiatives; $315.2 million, or 84 percent, is slated for enforcement initiatives. The rest is for infrastructure projects to, for example, consolidate paper processing operations.
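As a quick arithmetic check on the figures above (a sketch only; all figures are in millions of dollars, as stated in the request), the three funding sources do sum to the total of new proposed spending:

```python
# All figures in millions of dollars, from IRS's fiscal year 2005 request.
new_initiatives = 377.3
maintain_operations = 373.0
proposed_spending = new_initiatives + maintain_operations  # 750.3

budget_increase = 489.8      # requested appropriation increase
program_reductions = 149.7   # primarily reduced BSM funding
internal_savings = 110.8     # process and system improvements
funding_sources = budget_increase + program_reductions + internal_savings  # 750.3

assert round(proposed_spending, 1) == round(funding_sources, 1) == 750.3
print(315.2 / new_initiatives)  # about 0.84 -- enforcement's share of new initiatives
```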
The major enforcement initiatives include: $90.2 million and 874 full-time equivalents (FTEs) to target noncompliance by small business and self-employed taxpayers by hiring field examination and collection, automated collection, and service center-based compliance staff; $65 million and 260 FTEs for additional criminal investigation resources to combat corporate fraud, increase tax enforcement, and enhance criminal investigation capabilities by hiring additional criminal investigators and special agents to focus on corporate financial fraud and general tax enforcement, improving forensic electronic evidence capabilities, and increasing special agent support staff; $36 million and 207 FTEs to combat corporate abusive tax shelters by devoting more resources to reviewing offshore transactions; $15.5 million and 175 FTEs to increase individual taxpayer compliance by focusing on the full spectrum of individual taxpayer noncompliance, including nonfilers, nonpayers of tax owed, and more tax assessments on underreported income; and $15.1 million and 144 FTEs to combat diversions of charitable assets and stop abusive transactions in the tax-exempt area by focusing on terrorism funding and civil fraud by charities, and targeting tax avoidance strategies by charities. IRS is proposing to spend $373 million to maintain current operations, which would cover the increased costs of continuing current operations. The increased costs include $133 million for salary increases assumed in IRS's budget. IRS's 2005 budget assumes a federal salary increase of 1.5 percent. If the actual federal salary increase is higher than 1.5 percent, IRS will have to cover the unbudgeted portion of the increase. For 2005, IRS has identified $110.8 million in savings to be generated from process and system improvements. Key savings initiatives include: $34.0 million and 408 FTEs from a reorganization of the information systems function that will consolidate three parallel organizations, and reduce staff, to improve operations and support to IRS customers; $15.7 million and 220 FTEs from consolidating insolvency and exam/collection field support from over 80 locations to 5 or fewer; $14.9 million and 167 FTEs from the termination of transition employees who could not be placed when offices closed and jobs were shifted as IRS reorganized into business units; and $5.1 million and 130 FTEs due to more electronic filing. In addition to the savings, IRS has identified $149.7 million in program reductions to help fund its 2005 spending priorities. The reductions include $102.7 million due to reductions in the scope of certain BSM projects (discussed later in more detail) and $18 million in overhead reductions. In its last five budget requests, IRS has asked for more enforcement staff, to be funded partly by budget increases and partly through internal savings. Despite budget requests that were almost fully funded and despite achieving some savings, the number of skilled enforcement staff actually declined. The budget increases and savings were consumed by other priorities, including unbudgeted expenses. Table 1 shows that IRS has received about 98 percent or more of its budget requests since fiscal year 2002. Table 2 shows that in 2003 IRS realized about 34 percent of its anticipated budget savings and about 41 percent of its anticipated staff savings.
In 2004, IRS officials believe they did a better job in both estimating and tracking the savings and estimate they will be able to reinvest 77 percent of the anticipated budget savings and 53 percent of the anticipated staff savings. IRS should be commended for identifying savings and reinvestment opportunities in its budget request. While IRS has been unable to achieve its savings targets, we recognize that budget preparation begins about 18 months before the beginning of the fiscal year, making it difficult to accurately predict future savings. IRS officials believe they are doing a better job of both estimating and tracking savings. Nevertheless, IRS's history raises questions about its ability to achieve the 2005 savings targets. Despite budget requests that were almost fully funded, and despite realizing some savings, IRS has been unable to achieve the enforcement staffing increases projected in its recent budgets. As shown in figure 1, the number of revenue agents (those who audit complex returns), revenue officers (those who do field collection work), and special agents (those who perform criminal investigations) has decreased by over 21 percent between 1998 and 2003. The Large- and Mid-size Business (LMSB) operating division, responsible for combating abusive corporate tax shelters and assuring that large businesses are in compliance with the tax laws, is an example of these staffing trends. According to LMSB officials, at the beginning of fiscal year 2002, they had 5,047 revenue agents on board. This number was reduced to 4,431 at the beginning of fiscal year 2004—a 12 percent reduction—due to attrition and the inability to hire. The declines in enforcement staff have been associated with declines in enforcement efforts. For example, audit rates are below the levels of the mid-1990s, even after accounting for recent increases. Figure 2 shows the trend in total audits of individual taxpayers since 1993. Total audits include both face-to-face audits and less complex correspondence audits. IRS and GAO have reported that IRS has experienced steep declines in audit rates since 1996, although the audit rate has slowly increased since 2000. The link between the decline in enforcement staff and the decline in enforcement actions, such as audit rates, is complicated by other factors, such as changes over time in the mix of complex and simple enforcement actions. However, IRS officials have stated that the decline in enforcement staff has restricted their enforcement efforts. For example, LMSB officials stated that they hired about 200 fewer revenue agents than planned in fiscal year 2003 and expect to hire about 95 fewer in fiscal year 2004 because of budget constraints. They estimated that had this hiring occurred as planned, LMSB could have examined an additional 505 returns and 1,877 returns in fiscal years 2003 and 2004, respectively. In addition, the 2005 budget request attributes the decline in enforcement actions to the decline in enforcement staff. The impact of the recent declines in enforcement staffing and enforcement actions on taxpayers' rate of voluntary compliance is not known. This leaves open the question of whether these declines are eroding taxpayers' incentives to voluntarily comply. As we have reported, IRS's National Research Program (NRP), which is developing new estimates of taxpayer compliance, is underway. These estimates will be the first based on data more recent than 1988, when IRS last measured voluntary compliance.
According to IRS officials, the new estimates should be available in 2005. Until the NRP estimates are available, IRS lacks current data on compliance, including changes in taxpayers' compliance rate. NRP is important for several reasons beyond measuring compliance. It is intended to help IRS better target its enforcement actions, such as audits, on noncompliant taxpayers and minimize audits of compliant taxpayers. It could also help IRS better understand the impact of taxpayer service on compliance. Priorities other than enforcement, including unbudgeted expenses and taxpayer service, have consumed IRS's budget increases and savings over the last few years. Unbudgeted expenses include unfunded portions of the annual pay increases, which can be substantial given IRS's large workforce, and other costs, such as postage increases and higher-than-budgeted rent increases. According to IRS officials, these unbudgeted expenses accounted for $154 million of IRS's budget in 2002, $311 million in 2003, and $169 million in 2004. IRS officials also told us that they anticipate having to cover unbudgeted expenses in 2005. As of March 2004, they were projecting unbudgeted salary increases for fiscal year 2005 of at least $100 million. This projection could change since the actual federal salary increase for 2005 has not been finalized. Another reason for the reduction in enforcement staff has been IRS's emphasis on improving service to taxpayers. According to IRS officials, much of this improvement has come at the expense of additional resources for enforcement and has resulted in less hiring of new staff for enforcement activities. IRS is requesting about $1.93 billion (including 7,385 staff years) in information technology (IT) resources for fiscal year 2005. This includes (1) $285 million for the agency's multiyear capital account that funds contractor costs for the Business Systems Modernization (BSM) program and (2) about $1.64 billion for information systems, of which $1.55 billion (including 7,137 staff years) is for operations and maintenance. BSM is important for IRS's future because it has the potential for long-term efficiency gains without major increases in staffing or other resources. Consistent with the Clinger-Cohen Act of 1996 and the Government Performance and Results Act of 1993, OMB guidance on budget preparation and submission requires that, before requesting multiyear funding for capital asset acquisitions, agencies develop sufficient justification for these investments. The guidance requires that agencies implement key IT management practices, including an integrated IT architecture and a process for managing information systems projects as investments. In addition, agencies are to prepare business cases that reasonably demonstrate how proposed investments support agency missions and operations and provide positive business value in terms of expected costs, benefits, and risks. Beginning in 1995, when IRS was involved in an earlier attempt to modernize its tax processing systems, and continuing since then, we have made recommendations that IRS implement fundamental modernization management capabilities before acquiring new systems. We recommended, among other things, that IRS (1) put in place an enterprise architecture (modernization blueprint) to guide and constrain its business system investments and (2) implement disciplined processes for investment decision management and system development management.
In response to our recommendations, IRS developed and is using an enterprise architecture, which describes IRS's current and target business and technology environments and the associated high-level transition strategy that identifies and conceptually justifies the investments needed to guide the agency's transition over many years from its current to its target architectural state. IRS also implemented a capital planning and investment control process for developing business cases and managing BSM projects as part of an investment portfolio, as well as a systems life cycle management methodology, which IRS refers to as the enterprise life cycle. IRS's $285 million request for the BSM account for fiscal year 2005 is based on its enterprise architecture as well as its related investment management process and life cycle management methodology. IRS's BSM budget request constitutes a reduction of more than 25 percent from the planned fiscal year 2004 spending level of $388 million and reflects the agency's decision, in light of ongoing project delays, to focus on a smaller modernization project portfolio in an effort to better ensure that cost targets are maintained, project schedules are met, and the promised projects are delivered. Pursuant to statute, funds from the BSM account are not available for obligation until IRS submits to the congressional appropriations committees for approval an expenditure plan that meets certain conditions. In January 2004, IRS submitted an expenditure plan seeking approval to obligate funds from the BSM account for its planned fiscal year 2004 projects and program-level initiatives. IRS's fiscal year 2004 plan reported the deployment of modernization projects during fiscal year 2003 that have benefited taxpayers and the agency, including an application that provides refund status for the advanced child tax credit and the first release of a new human resources system, HR Connect, which has now been delivered to 73,000 IRS employees. In our briefing to the staff of the relevant appropriations subcommittees on the results of our review of the fiscal year 2004 expenditure plan, we reported that IRS has made progress in implementing our prior recommendations to improve its modernization management controls and capabilities. However, certain of these controls and capabilities, related to configuration management, human capital management, cost and schedule estimating, and contract management, have not yet been fully implemented or institutionalized. Our analysis has shown that weaknesses in these controls and capabilities have contributed, at least in part, to the cost and schedule shortfalls experienced by most BSM projects. In the absence of appropriate management controls, systems modernization projects will likely be hampered by additional cost and schedule shortfalls, for two reasons: first, the tasks associated with projects that are moving beyond design and into development are, by their nature, more complex and risky; second, the fiscal year 2004 expenditure plan supports progress toward the later, more complex phases of key projects as well as continued development of other projects. Based on IRS's expenditure plans, BSM projects have consistently cost more and taken longer to complete than originally estimated. In its fiscal year 2004 plan, IRS disclosed that key BSM projects have continued to experience cost and schedule shortfalls against prior commitments.
Table 4 shows the life cycle variance in cost and schedule estimates for completed and ongoing BSM projects. These variances are based on a comparison of IRS's initial and revised cost and schedule estimates to complete initial operation or full deployment of the projects. We did not independently validate planned projects' cost estimates or confirm, through system and project management documentation, the validity of IRS-provided information on the projects' content and progress. As the table indicates, the cost and schedule estimates for full deployment of the e-Services project have increased by just over $86 million and 18 months, respectively, which included a significant expansion from the initial project scope. In addition, the estimated cost for the full deployment of Customer Account Data Engine (CADE) Release 1 has increased by almost $37 million, and project completion has been delayed by 30 months. In addition to the modernization management control shortcomings discussed above, our work has shown that the increases and delays were caused, in part, by the following factors: inadequate definitions of systems requirements, as a result of which additional requirements have been incorporated into ongoing projects; increases in project scope (for example, the e-Services project has changed significantly since the original design, with IRS broadening the scope to provide additional benefits to internal and external customers); underestimation of project complexity, a factor that contributed directly to the significant delays in the CADE Release 1 schedule; competing demands of projects for test facilities, because testing infrastructure capacity is insufficient to accommodate multiple projects when testing schedules overlap; and project interdependencies, whereby delays with one project have had a cascading effect and caused delays in related projects. These cost overruns and schedule delays impair IRS's ability to make appropriate decisions about investing in new projects, delay delivery of benefits to taxpayers, and postpone resolution of material weaknesses affecting other program areas. Producing reliable estimates of expected costs and schedules is essential to determining a project's cost-effectiveness. In addition, it is critical for budgeting, management, and oversight. Without this information, the likelihood of poor investment decisions is increased. Schedule slippages delay the provision of modernized systems' direct benefits to the public. For example, as table 4 shows, slippages in CADE will delay IRS's ability to provide faster refunds and respond to taxpayer inquiries on a timely basis. Delays in the delivery of modernized systems also affect the remediation of material internal management weaknesses. For example, the Custodial Accounting Project is intended to address a material weakness in IRS's financial reporting process and provide a mechanism for tracking and summarizing individual taxpayer transactions. This release has yet to be implemented, and a revised schedule has not yet been determined. In addition, the Integrated Financial System is intended to address financial management reporting weaknesses. When IRS submitted its fiscal year 2003 BSM expenditure plan, Release 1 of the Integrated Financial System was scheduled for delivery on October 1, 2003. However, it has yet to be implemented, and additional cost increases are expected.
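Table 4's variances are simple differences between initial and revised life cycle estimates. The sketch below illustrates the computation; the baseline figures are hypothetical, since this statement reports only the deltas (for example, just over $86 million and 18 months for e-Services):

```python
from dataclasses import dataclass

@dataclass
class ProjectEstimate:
    cost_millions: float
    schedule_months: int

def variance(initial: ProjectEstimate, revised: ProjectEstimate) -> tuple:
    """Cost and schedule variance against the original commitment."""
    return (revised.cost_millions - initial.cost_millions,
            revised.schedule_months - initial.schedule_months)

# Hypothetical baselines for illustration; only the resulting deltas
# (86.0 million dollars, 18 months) match figures cited in this statement.
e_services_initial = ProjectEstimate(cost_millions=100.0, schedule_months=24)
e_services_revised = ProjectEstimate(cost_millions=186.0, schedule_months=42)
print(variance(e_services_initial, e_services_revised))  # (86.0, 18)
```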
Given the continued cost overruns and schedule delays experienced by these BSM projects, IRS and the prime systems integration support (PRIME) contractor, Computer Sciences Corporation (CSC), initiated and recently completed several in-depth and more comprehensive assessments of BSM. These assessments revealed several significant weaknesses that have driven project cost overruns and schedule delays and also provided a number of recommendations for IRS and CSC to address the identified weaknesses and reduce the risk to BSM. The deficiencies identified are consistent with our prior findings. IRS developed a BSM action plan to address the findings and recommendations resulting from these assessments. IRS expects to complete implementation of its actions by the end of the calendar year. Because of the significant risks associated with the findings of these various assessments, continued monitoring by IRS and validation of the effectiveness of corrective actions are critical to reducing the likelihood of additional cost overruns and schedule delays. It will be important for IRS to continue its efforts to balance the scope and pace of the program with the agency's capacity to handle the workload, and to institutionalize the management processes and controls necessary to resolve the deficiencies identified by our reviews and the recent program assessments. Meeting these challenges and improving performance are essential if IRS and the PRIME contractor are to successfully deliver the BSM program. The Paperwork Reduction Act (PRA) requires federal agencies to be accountable for their IT investments and responsible for maximizing the value and managing the risks of their major information systems initiatives. The Clinger-Cohen Act of 1996 establishes a more definitive framework for implementing the PRA's requirements for IT investment management. It requires federal agencies to focus more on the results they have achieved and introduces more rigor and structure into how agencies are to select and manage IT projects. Leading private- and public-sector organizations have taken a project- or system-centric approach to managing not only new investments but also operations and maintenance of existing systems. As such, these organizations identify operations and maintenance projects and systems for inclusion in budget requests; assess these projects or systems on the basis of expected costs, benefits, and risks to the organization; analyze these projects as a portfolio of competing funding options; and use this information to develop and support budget requests. This focus on projects, their outcomes, and risks as the basic elements of analysis and decision making is incorporated in the IT investment management approach recommended by OMB and GAO. By using these proven investment management approaches for budget formulation, agencies have a systematic method, based on risk and return on investment, to justify what are typically very substantial information systems operations and maintenance budget requests.
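As an illustration of the portfolio approach described above (not OMB's or IRS's actual methodology; project names, figures, and the scoring rule are all invented), competing funding options can be ranked by a simple risk-adjusted return:

```python
# Illustrative only: rank hypothetical operations-and-maintenance projects
# by return on investment discounted for risk. Real investment management
# processes weigh many more factors than this sketch does.
projects = [
    # (name, expected benefit $M, expected cost $M, risk score 0..1)
    ("telecom refresh",   12.0,  8.0, 0.2),
    ("legacy migration",  30.0, 20.0, 0.6),
    ("help desk tooling",  5.0,  3.0, 0.1),
]

def score(benefit: float, cost: float, risk: float) -> float:
    """Return on investment, discounted by an assumed probability of failure."""
    return (benefit - cost) / cost * (1.0 - risk)

ranked = sorted(projects, key=lambda p: score(p[1], p[2], p[3]), reverse=True)
for name, b, c, r in ranked:
    print(f"{name}: score {score(b, c, r):.2f}")
# help desk tooling: 0.60, telecom refresh: 0.40, legacy migration: 0.20
```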
To address our recommendation, IRS agreed to take the following actions: develop an activity-based cost model to plan, project, and report costs for business tasks/activities funded by the information systems budget; develop a capital planning guide to implement processes for capital planning and investment control, budget formulation and execution, business case development, and project prioritization; and implement a process for managing all information systems investments as a portfolio, patterned after the BSM program. IRS has made progress in implementing investment management best practices in developing and supporting its information systems budget request. IRS officials reported that the agency is managing all information systems funding requirements as a portfolio within Treasury's IT investment portfolio system, and preparing business cases for many of its operational program activities, as required by OMB. According to IRS, these business cases are updated on a periodic basis and are evaluated within the context of the agency's overall IT funding portfolio. IRS plans to align this portfolio management process with the capital planning and investment control system now being implemented to provide a uniform process to select, manage, and control all IT investments, including modernization, enhancements, and sustaining operations. Although progress has been made, IRS has not yet completed all of its planned actions to implement our prior recommendation. IRS's capital planning and investment control guide has been delayed due to changing roles and responsibilities within the Modernization and Information Technology Services organization and thus was not used in preparing the fiscal year 2005 information systems budget request. According to IRS, the capital planning guidance will not be completed until September 2004. In addition, as of March 2004, IRS has not yet developed an activity-based cost accounting system to enable it to account for the full cost of operations and maintenance projects and determine how effectively IRS projects are achieving program goals and mission needs. This cost model, which is being developed in conjunction with the Integrated Financial System modernization project, has been delayed and, due to Integrated Financial System schedule delays, will not be available until the fiscal year 2008 budget formulation cycle. Until IRS implements the capital planning and investment control guidance and the activity-based cost model and incorporates them into the preparation of its information systems budget request, the agency will not be able to ensure that the information systems operations and maintenance request is adequately supported. IRS's filing season performance through mid-March has improved in most areas compared to recent years, based on data we reviewed on five key filing season activities—paper and electronic processing, telephone assistance, IRS's Web site, and walk-in assistance. However, the accuracy of tax law answers provided by IRS telephone staff declined. Although we cannot quantify the connection between these improvements and IRS's actions, they appear to represent a payoff from IRS's modernization and an increased emphasis on service since the IRS Restructuring and Reform Act of 1998. Table 5 summarizes IRS's filing season performance so far this year compared to recent years. The following sections will address IRS's specific performance in key areas.
According to IRS officials, tax industry representatives, and data we reviewed, the 2004 filing season is progressing smoothly (meaning without disruptions in IRS computer systems used in processing that would have a negative impact on taxpayers), and IRS is either meeting or exceeding its goals for the number of days to process individual income tax returns, depending on the type of return. As table 5 shows, through March 19, 2004, IRS had processed about 63 million individual tax returns—of which 43 million were received electronically, about 4.4 million more electronically filed returns than at this time last year. IRS officials have attributed this year's performance, in part, to having planned appropriately for issues such as correcting errors related to the advanced child tax credit. Through March 12, 2004, IRS had identified about 2.7 million individual tax returns with errors, with approximately 1.6 million related to the advanced child tax credit. Electronic filing has grown from the same time last year. It has also grown by about 250 percent overall—from about 15 million returns in 1996 to about 53 million in 2003. Although electronic filing continues to grow, IRS is not on track to reach the long-term electronic filing goal of 80 percent by 2007 set by Congress in the IRS Restructuring and Reform Act of 1998. IRS officials recognize that they will not achieve the goal of having 80 percent of all individual income tax returns filed electronically by 2007. However, IRS officials told us they will continue to strive to achieve that goal in the future. Moreover, as we reported last year, the growth rate from 1996 through 2003 has been generally decreasing, with the 13 percent growth rate in 2003 representing the smallest percentage increase in the number of individual tax returns filed electronically since 1996. Although the current growth rate is about 11 percent, according to IRS data, the number of electronic filings is ahead of estimates at this time. Consequently, IRS officials believe IRS will meet and might exceed the annual growth rate goal of 12 percent by year's end. Growth in electronic filing remains a key part of IRS's modernization strategy. Electronic filing has allowed IRS to reduce resources devoted to processing (discussed in appendix I) and begin consolidating paper processing centers. It also reduces errors, because IRS does not have to transcribe tax return information and some up-front checks are built into electronic filing. Finally, taxpayers get refunds more quickly with electronic filing—IRS's goal for refunds on electronically filed returns is about half the 40 days that IRS allows for refunds on returns filed on paper. IRS has implemented numerous initiatives over the years intended to increase electronic filing usage. IRS's new major electronic filing initiatives this year relate to business rather than individual income tax returns. They are Modernized e-File, which allows the electronic filing of the corporate income tax form 1120, and E-Services, a suite of Internet services offered to tax practitioners, such as electronic account resolution and transcript delivery. IRS officials do not expect these initiatives to dramatically increase electronic filing of individual tax returns this year, because taxpayers and practitioners will need to adjust their behavior to take advantage of the new services.
These initiatives are nonetheless important, because they should increase the willingness of tax practitioners to file both corporate and individual tax returns electronically in future filing seasons, which can currently be done only on a limited basis for corporate returns. IRS made some changes to improve the Free File Alliance program, which began last year to promote electronic filing of individual income tax returns. As of March 7, 2004, IRS had received almost 2.5 million Free File tax returns, compared with 2.0 million at the same time last year—an increase of 24 percent. One issue with the Free File program is that IRS cannot determine how many of the Free File users are new electronic filers. We plan to follow up on this issue as part of our annual filing season report. Access to IRS's toll-free telephone lines has improved over the last two years, although accounts accuracy (the accuracy of answers to questions from taxpayers about the status of their accounts) has stabilized and tax law accuracy declined. As table 5 shows, as of March 13, 2004, IRS had received 29 million telephone calls. The percentage of taxpayers who attempted to reach an assistor and actually got through and received service—referred to as the Customer Service Representative (CSR) level of service—increased to 84 percent, which is 2 percentage points over the same period last year and 22 percentage points over the same period in 2002. According to IRS officials, the gains in CSR level of service are largely due to continued improvements resulting from increased specialization, improved technology, and continued focus on maintaining telephone staffing. IRS estimates that accounts accuracy is essentially the same this year as for the last two years at this time. As shown in table 5, taxpayers who called about their accounts received correct information an estimated 89 percent of the time in 2004. IRS officials said that accounts accuracy rates remained stable because the accounts workload has remained relatively stable. At the same time, table 5 shows that IRS estimates that tax law accuracy declined from 84 percent in 2002 and 82 percent last year to 76 percent so far this year. IRS officials said that tax law accuracy rates declined because formatting changes made in 2003 to the guide CSRs use to help them answer questions have not enhanced usability as IRS anticipated. According to IRS, although training was provided to the staff for the changes to their assigned subjects, IRS underestimated the impact these changes would have on overall quality. Also, IRS officials said they have begun redesigning the CSRs' guide and are continuing to conduct detailed analysis of quality data to identify immediate opportunities to improve the accuracy of service. IRS's Web site use has increased over the last 2 years, as shown in table 6. Also, an independent Web site rater reported that, for 7 out of 10 weeks of the filing season, IRS's Web site ranked in the top 10 of 40 sites in a government Web site index for the time it took to download information. Over the last 2 years, IRS has added two features to assist taxpayers, which likely contributed to the increased usage of the IRS Web site. In fiscal year 2003, IRS added the "Where's My Refund?" feature, and in 2004 it added the "Remember Your Advanced Child Tax Credit" feature. The "Where's My Refund?" feature enables taxpayers to access IRS's Web site to determine whether IRS received their tax return, whether their refund was processed, and, if processed, approximately when to expect the refund.
Table 5 shows that as of March 20, 2004, the use of this feature was up by 53 percent from last year, from about 9.3 million attempts to about 14.3 million. The "Remember Your Advanced Child Tax Credit" feature enables a person to access IRS's Web site to determine the amount of the advanced child tax credit he or she received. As of March 21, 2004, about 8.5 million accesses had been made to the "Remember Your Advanced Child Tax Credit" feature. Overall, we found that IRS's Web site continues to improve when it comes to providing services to taxpayers. However, we continue to have concerns about the forms and publication search function. We found that the forms and publication search function still does not always make the most pertinent information readily available. For example, when we typed "earned income tax credit" into the forms and publication search function, Publication 596—the primary publication on the earned income tax credit—was the 79th item on the list, and we had to scroll through eight pages to find it. The number of taxpayers receiving assistance at IRS walk-in sites continued to decline. At any one of IRS's over 400 walk-in sites, taxpayers get various types of assistance, including answers to tax law questions, assistance with their accounts, and return preparation assistance (generally for low-income taxpayers). The number of people who received assistance at an IRS walk-in site declined by 11 percent compared with the same period last year. IRS continues to restrict free tax preparation services to, for example, taxpayers with an annual gross income level of $35,000 or less, because of the labor-intensive nature of that work and to enable staff to concentrate on other services that only IRS can provide, such as account assistance. IRS reduced the number of staff available for return preparation by 20 percent from 2003. As the data in table 5 indicate, the number of returns being prepared has decreased by about 36 percent compared with this time last year. These trends are consistent with ones we have previously reported for recent filing seasons. Figure 4 shows a downward trend in the overall assistance provided and in the return preparation at the walk-in sites. Sites staffed by volunteers certified by IRS do not provide the range of services IRS provides, such as account assistance, and operate primarily during the filing season. IRS is promoting these as alternatives to its walk-in assistance sites for certain types of service. IRS works to ensure that walk-in sites have a listing of services, hours, and locations of the volunteer sites in their area. As of March 2004, there were approximately 11,600 volunteer sites. IRS also promotes its telephone operations and Web site at its walk-in sites. The quality of tax law assistance provided at IRS's walk-in sites in 2004 was comparable to the same period last year. This conclusion is based on TIGTA reviews through February 2004. Congress has been supportive of IRS's efforts to improve service to taxpayers and increase enforcement staff, and IRS has succeeded at the former. However, despite budgets that were almost fully funded and savings realized through efficiency gains, IRS has not been able to increase enforcement staff. In fact, staffing of key enforcement occupations has declined. The declines in IRS's enforcement staff and the related declines in its enforcement efforts raise concerns that taxpayers' incentives to voluntarily comply with their tax obligations could be eroding.
Strengthening enforcement programs by increasing staffing while providing a high level of taxpayer service will continue to be a challenge for IRS. Unbudgeted costs are expected to compete for the funds IRS has allocated in its 2005 budget request for new spending, including the enforcement initiatives. If, as has been the case in recent years, IRS fails to realize all expected savings, then the funds available for new spending would be further reduced. One option for increasing enforcement staff in the near term is to reconsider the level and types of service IRS provides to taxpayers. Taxpayer services are much improved, raising a question about the appropriate balance to strike between investing in further service improvements and enforcement. At the same time, the use of IRS's walk-in assistance sites is declining. The improvements in telephone service, increased Web site use, and the availability of volunteer sites raise a question about whether IRS should continue to operate as many walk-in sites. Reconsidering the level and types of service is an option—but not a recommendation—to be considered by IRS management and the Congress. The challenge of increasing IRS's enforcement staff highlights the importance of succeeding with NRP and BSM. NRP should, if completed successfully, provide the first new data to estimate the voluntary compliance rate since IRS last estimated the compliance rate using 1988 data. The new estimates could have implications for future IRS budgets. If compliance rates are comparable to those estimated using 1988 data, the pressure to increase IRS's enforcement staff would likely diminish. If, however, compliance rates are down, the pressure to increase enforcement staff and the pressure on IRS's budget could increase. BSM and related initiatives such as electronic filing hold the long-term promise of efficiency gains that could allow IRS to improve both taxpayer service and enforcement without significant budget increases. However, cost overruns and schedule delays associated with ongoing BSM projects, along with planned reductions to the BSM project portfolio, mean that many of these benefits will not be realized in the short term. As we have recommended, various management controls and capabilities need to be fully implemented and institutionalized. Otherwise, the projects will likely encounter additional cost and schedule shortfalls. In our review of IRS's 2004 budget request, we provided figures showing IRS's expenditures and staff allocations in fiscal year 2002. Figures 5 and 6 illustrate how IRS allocated expenditures and staff in fiscal year 2003. Figure 5 shows that total expenditures increased from $10.4 billion in 2002 to $11.8 billion in 2003. While the division of expenditures across categories has generally remained the same as 2002 allocations, equipment increased from 4 to 6 percent of total expenditures from 2002 to 2003. Figure 6 shows IRS's total staff resources decreased slightly from 99,180 in 2002 to 98,381 in 2003. IRS's allocation of staffing resources remained largely similar, but with a 1 percentage point decrease in the percentage of staff years devoted to processing tax returns. The boundaries between the categories presented in these figures may not be well defined. For example, staff categorized under providing management and other services could also be considered under taxpayer service, processing, or compliance.
Therefore, the figures are meant to provide a summary of how IRS uses its resources and should be interpreted with caution. However, the 1 percentage point decrease in staff years devoted to processing tax returns is important because it represents a cumulative payoff from electronic filing and shows the potential for shifting IRS resources from one area to another.
Effective tax administration requires a combination of quality customer service to help those who want to comply and effective enforcement measures against those who do not. For the last few years, we have been reporting on improvements in taxpayer service and declines in enforcement. With respect to IRS's fiscal year 2005 budget request, the Subcommittee asked GAO to assess whether (1) IRS will be able to allocate more resources to enforcement, and (2) Business Systems Modernization (BSM) and other technology efforts will deliver cost savings and efficiencies in the immediate future. For the 2004 filing season, GAO was asked to assess IRS's performance in processing returns and providing assistance to taxpayers. IRS is requesting $10.7 billion in fiscal year 2005, 4.8 percent over 2004. This includes $377.3 million, primarily for additional enforcement staff, and $373 million for increased costs of maintaining current operations, funded from three sources: the budget increase, program reductions, and internal savings. The request for more enforcement staff follows similar requests in IRS's past five budgets. Despite budget requests that were almost fully funded and despite realizing some savings in prior years, the number of IRS's most skilled enforcement staff declined by over 21 percent between 1998 and 2003 because of other priorities, including unbudgeted expenses. This history, and the expectation of unbudgeted costs in 2005, raises questions about whether IRS will be able to increase enforcement staff as planned. IRS's request also includes about $1.93 billion in information technology: $285 million for BSM contractor costs and about $1.64 billion for information systems. Most BSM projects have experienced cost overruns and schedule delays, postponing the benefits expected under BSM. IRS reduced its BSM budget request to focus on fewer projects and is implementing plans to respond to known deficiencies. IRS has made progress in implementing investment management best practices for developing and supporting its information systems budget request. However, until IRS fully implements the improvements, its ability to develop supportable budget requests for information systems operations and maintenance will be limited. IRS's reported 2004 filing season performance in key areas improved, with the exception of the accuracy of tax law responses provided over the telephone to taxpayers, which declined. Also, the number of taxpayers seeking assistance at IRS's walk-in assistance sites declined, as did the number of tax returns prepared at those sites.
Medicare is a federal health insurance program designed to assist elderly and disabled beneficiaries. Hospital insurance, or part A, covers inpatient hospital, skilled nursing facility, hospice, and certain home health services. Supplemental medical insurance, or part B, covers physician and outpatient hospital services, laboratory services, and other services. Claims are paid by a network of 49 claims administration contractors called intermediaries and carriers. Intermediaries process claims from hospitals and other institutional providers under part A, while carriers process part B claims. The intermediaries' and carriers' responsibilities include reviewing and paying claims, maintaining program safeguards to prevent inappropriate payment, and educating and responding to provider and beneficiary concerns. Medicare contracting for intermediaries and carriers differs from that of most federal programs. Most federal agencies, under the Competition in Contracting Act and its implementing regulations known as the Federal Acquisition Regulation (FAR), generally may contract with any qualified entity for any authorized purpose so long as that entity is not debarred from government contracting and the contract is not for what is essentially a government function. Agencies are to use contractors that have a track record of successful past performance or that demonstrate a current superior ability to perform. The FAR generally requires agencies to conduct full and open competition for contracts and allows contractors to earn profits. Medicare, however, is authorized to deviate from the FAR under provisions of the Social Security Act enacted in 1965. For example, there is no full and open competition for intermediary or carrier contracts. Rather, intermediaries are selected in a process called nomination by provider associations, such as the American Hospital Association. This provision was intended at the time of Medicare's creation to encourage hospitals to participate by giving them some choice in their claims processor. Currently, there are three intermediary contracts, including the national Blue Cross Blue Shield Association, which serves as the prime contractor for 26 local member plan subcontractors. When one of the local Blue plans declines to renew its subcontract, the Association nominates the replacement contractor. Carriers are chosen by the Secretary of Health and Human Services from a small pool of health insurers, and the number of such companies seeking Medicare claims-processing work has been dwindling in recent years. The Social Security Act also generally calls for the use of cost-based reimbursement contracts under which contractors are reimbursed for necessary and proper costs of carrying out Medicare activities but does not expressly provide for profit. Further, Medicare contractors cannot be terminated from the program unless they are first provided with an opportunity for a public hearing, a process not afforded under the FAR. Medicare could benefit from various contracting reforms. Freeing the program to directly choose contractors on a competitive basis from a broader array of entities able to perform needed tasks would enable Medicare to benefit from efficiency and performance improvements related to competition. It also could address concerns about the dwindling number of insurers with which the program now contracts.
Allowing Medicare to have contractors specialize in specific functions rather than assume all claims-related activities, as is the case now, also could lead to greater efficiency and better performance. Authorizing Medicare to pay contractors based on how well they perform rather than simply reimbursing them for their costs, as well as allowing the program to terminate contracts more efficiently when program needs change or performance is inadequate, could also result in better program management. Since Medicare was implemented in 1966, the program has used health insurers to process and pay claims. Before Medicare’s enactment, providers feared that the program would give the government too much control over health care. To win acceptance, the program was designed to be administered by health insurers like Blue Cross and Blue Shield. Subsequent regulations and decades of the agency’s own practices have further limited how the program contracts for claims administration services. The result is that agency officials believe they must contract with health insurers to handle all aspects of administering Medicare claims, even though the number of such companies willing to serve as Medicare contractors has declined and the number of other entities capable of doing the work has increased. While using only health insurers for claims administration may have made sense when Medicare was created, that may be much less so today. The explosion in information technology has increased the potential for Medicare to use new types of business entities to administer its claims processing and related functions. Additionally, the need to broaden the pool of entities allowed to be contractors has increased in light of contractor attrition. Since 1980, the number of contractors has dropped by more than half, as many have decided to concentrate on other lines of business. This has left the program with fewer choices when one contractor withdraws, or is terminated, and another must be chosen to replace it. Since 1993, the agency has repeatedly submitted legislative proposals to repeal the provider nomination authority and make explicit its authority to contract for claims administration with entities other than health insurers. Just this month, the Secretary of Health and Human Services told the Senate Finance Committee that CMS should be able to competitively award contracts to the entities best qualified to perform these functions and stated that such changes would require legislative action. With such changes, when a contractor leaves the program, CMS could award its workload on a competitive basis to any qualified company or combination of companies—including those outside the existing contractor pool, such as data processing firms. Allowing Medicare to have separate contractors for specific claims administration activities—also called functional contracting—could further improve program management. Functional contracting would enable CMS to select contractors that are more skilled at certain tasks and allow these contractors to concentrate on those tasks, potentially resulting in better program service. For example, the agency could establish specific contractors to improve and bring uniformity to efforts to educate and respond to providers and beneficiaries, efforts that now vary widely among existing contractors. 
Currently, CMS interprets the Social Security Act and the regulations implementing it as constraining the agency from awarding separate contracts for individual claims administration activities, such as handling beneficiary inquiries or educating providers about program policies. Current regulations stipulate that, to qualify as an intermediary or carrier, the contracting organization must perform all of the Medicare claims administration functions. Thus, agency officials feel precluded from consolidating one or more functions into a single contract or a few regional contracts to achieve economies of scale and allow specialization to enhance performance. CMS has had some experience with functional contracting under authority granted in 1996 to hire entities other than health insurers to focus on program safeguards. CMS has contracted with 12 program safeguard contractors (PSCs) that compete among themselves to perform task-specific contracts called task orders. These entities represent a mix of health insurers, including many with prior experience as Medicare contractors, along with consulting organizations and other types of firms. The experience with PSCs, however, makes clear that functional contracting has challenges of its own, which are discussed later in this testimony. Allowing Medicare to offer financial incentives to contractors for high-quality performance also may have benefits. According to CMS, the Social Security Act now precludes the program from offering such incentives because it generally stipulates that payments be based on costs. Contractors are paid for necessary and proper costs of carrying out Medicare activities but do not make a profit. Repeal of cost-based restrictions would free CMS to award different types of contracts, including those that provide contractors with financial incentives and permit them to earn profits. CMS could test different payment options to determine which work best. If effective in encouraging contractor performance, such contracts could lead to improved program operations and, potentially, to lower administrative costs. Again, implementing performance-based contracting will not be without significant challenges. Allowing Medicare to terminate contractors more efficiently may also promote better program management. The Social Security Act now limits the agency's ability to terminate intermediaries and carriers, and the provisions are one-sided. Intermediaries and carriers may terminate their contracts without cause simply by providing CMS with 180 days' notice. CMS, on the other hand, must demonstrate that (1) the contractor has failed substantially to carry out its contract or that (2) continuation of the contract is disadvantageous or inconsistent with the effective administration of Medicare. CMS must provide the contractor with an opportunity for a public hearing prior to termination. Furthermore, CMS may not terminate a contractor without cause, as most federal agencies can under the FAR. In past years, the agency has requested statutory authority to eliminate the public hearing requirement and the ability of contractors to unilaterally initiate contract termination. Such changes would bring Medicare claims administration contractors under the same legal framework as other government contractors and provide greater flexibility to more quickly terminate poor performers.
Eliminating contractors' ability to unilaterally terminate contracts also may help address challenges the agency faces in finding replacement contractors on short notice. While Medicare could benefit from greater contracting flexibility, time and care would be needed to implement changes to effectively promote better performance and accountability and avoid disrupting program services. Competitive contracting with new entities for specific claims administration services in particular will pose new challenges to CMS, challenges that will likely take significant time to fully address. These include preparing clear statements of work and contractor selection criteria, efficiently integrating the new contractors into Medicare's claims processing operations, and developing sound evaluation criteria for assessing performance. Because these challenges are so significant, CMS would be wise to adopt an experimental, incremental approach. The experience with authority granted in 1996 to hire special contractors for specific tasks related to program integrity can provide valuable lessons for CMS officials if new contracting authorities are granted. If given authority to contract competitively with new entities, CMS would need time to accomplish several tasks. First among these would be the development of clear statements of work and associated requests for proposals detailing the work to be performed and how performance will be assessed. CMS has relatively little experience in this area for Medicare claims administration because current contracts instead incorporate by reference all regulations and general instructions issued by the Secretary of Health and Human Services to define contractor responsibilities. CMS has experience with competitive contracting from hiring PSCs. However, it took 3 years to determine how best to implement the new authority through its broad umbrella contract, develop the statement of work, issue the proposed regulations governing the PSCs, develop selection criteria, review proposals, and select contractors. Program officials have told us they are optimistic about their ability to act more quickly if contracting reform legislation were enacted, given the lessons they have learned. However, we expect that it would take CMS a significant amount of time to develop its implementation strategy and undertake all the necessary steps to take full advantage of any changes in its contracting authority. CMS took an incremental approach to awarding its PSC task orders, and the same would be prudent for implementing any changes in Medicare's claims administration contracting authorities. Even after new contractors are hired, CMS should not expect immediate results. The PSC experience demonstrates that it will take time for new contractors to begin performing their duties. PSCs had to hire staff, obtain operating space and equipment, and develop the systems needed to ultimately fulfill contract requirements, activities that often took many months to complete. Without sufficient start-up time, new contractors might not operate effectively, and services to beneficiaries or providers could be disrupted. Developing a strategy for how to incorporate functional contractors into the program and coordinate their activities is key. While there may be benefits from specialization, having multiple companies performing different claims administration tasks could easily create coordination difficulties for the contractors, providers, and CMS staff.
For example, between 1997 and 2000, the Health Care Financing Administration (HCFA), CMS's predecessor agency, contracted with a claims administration contractor that subcontracted with another company for the review of the medical necessity of claims before they were paid. The agency found that having two different contractors perform these functions posed logistical challenges that could make it difficult to complete prepayment reviews without creating a backlog of unprocessed claims. The need for effective coordination was also seen in the PSC experience. PSCs and the claims administration contractors needed to coordinate their activities in cases where the PSCs assumed responsibility for some or all of the program safeguard functions previously performed by the contractors. In these situations, HCFA officials had to ensure that active claims did not get lost or ignored while in the processing stream. Coordination is also necessary to ensure that new efficiencies in one program area do not adversely affect another area. For example, better review of the medical necessity of claims before they are paid could lead to more accurate payment. This would clearly be beneficial but could also lead to an increase in the number of appeals of claims denials. Careful planning would be required to ensure adequate resources were in place to adjudicate those appeals and prevent a backlog. CMS has not stated how claims administration activities might be divided if the agency could do functional contracting. It would be wise for CMS to develop a strategy for testing different options on a limited scale. In our report on HCFA's contracting for PSC services, we recommended, and the agency generally agreed, that it should adopt such a plan because HCFA was not in a position to identify how best to use the PSCs to promote program integrity in the long term. Taking advantage of benefits from competition and performance-based contracting hinges on being able to identify goals and objectives and to measure progress in achieving them. Specific and appropriate evaluation criteria would be needed to effectively manage any new arrangements under contracting reform. Effective evaluations are dependent, in part, upon clear statements of expected outcomes tied to quantifiable measures and standards. We reported that, because CMS has not developed such criteria for most of its PSC task orders, it is not in a position to effectively evaluate its PSCs' performance, even though 8 of the 15 task orders had been ongoing for at least a year as of April 2001. If CMS begins using full and open competition to hire new entities for other specific functions, it should attempt to move quickly to develop effective outcomes, measures, and standards for evaluating such entities. Effective criteria are also critical if financial incentives are to be offered to contractors. Prior experiments with financial incentives for Medicare claims administration contractors generally have not been successful. This experience raises concerns about the likelihood of success of any immediate implementation of such authority without further testing. For example, between 1977 and 1986, HCFA established eight competitive fixed-price-plus-incentive-fee contracts designed to consolidate the workload of two or more small contractors on an experimental basis. Contractors could benefit financially by achieving performance goals in certain areas at the potential detriment of performance in other activities. In 1986, we reported that two of the contracts generated administrative savings estimated at $48 million to $50 million.
However, the two contractors' activities also resulted in $130 million in benefit payment errors (both overpayments and underpayments) that may have offset the estimated savings. One of these contractors subsequently agreed to pay over $140 million in civil and criminal fines for its failure to safeguard Medicare funds. Removing the contracting limitations imposed at Medicare's inception, in order to promote full and open competition and increase flexibility, could help modernize the program and lead to more efficient and effective management. However, change will not yield immediate results, and lessons learned from the experience with PSC contractors underscore the need for careful and deliberate implementation of any reforms that may be enacted. This concludes my statement. I would be happy to answer any questions that either Subcommittee Chairman or other Members may have. For further information regarding this testimony, please contact me at (312) 220-7600. Sheila Avruch, Bonnie Brown, Paul Cotton, and Robert Dee also made key contributions to this statement.
Discussions about how to reform and modernize the Medicare Program have, in part, focused on whether the structure that was adopted in 1965 is optimal today. Questions have been raised about whether the program could benefit from changes to the way that Medicare's claims processing contractors are chosen and the jobs they do. Medicare could benefit from full and open competition and the contracting flexibility that accompanies it, which could promote better performance and accountability. If the current limits on Medicare contracting authority are removed, the Centers for Medicare and Medicaid Services could (1) select contractors on a competitive basis from a broader array of entities capable of performing needed program activities, (2) issue contracts for discrete program functions to improve contractor performance through specialization, (3) pay contractors based on how well they perform rather than simply reimbursing them for their costs, and (4) terminate poor performers more efficiently.
Federal employees, including postal workers, are protected by a variety of laws against discrimination based on race, color, sex, religion, national origin, age, or disability. In addition, federal employees are protected from retaliation for filing a complaint, participating in an investigation of a complaint, or opposing a prohibited personnel practice. Federal employee EEO complaints are to be processed in accordance with regulations (29 C.F.R. part 1614) promulgated by EEOC. These regulations also establish processing time requirements for each stage of the complaint process. Under these regulations, federal agencies decide whether to dismiss or accept complaints employees file with them and investigate accepted complaints. After the investigation, a complainant can request a hearing before an EEOC administrative judge, who may issue a recommended decision that the agency is to consider in making its final decision. An employee who is dissatisfied with a final agency decision or its decision to dismiss a complaint may file an appeal with EEOC. Generally, federal employees must exhaust the administrative process before pursuing their complaints in court. EEOC will be implementing changes to the complaint process beginning in November 1999. One of the most significant changes involves decisions issued by administrative judges. Under the regulations, these decisions would no longer be recommendations that agencies could modify. Rather, as its final action (as final decisions will be called), an agency would issue a final order indicating whether or not it would fully implement the administrative judge's decision. If the agency chooses not to fully implement the decision, it will be required to file an appeal of the decision with EEOC. Complainants would retain their right to appeal an agency's final order. For a further discussion of the complaint process and upcoming changes, see app. II. In July 1998, we reported on our analysis of inventories of unresolved EEO complaints at federal agencies and EEOC and how trends in the number of complaints filed and the time taken to process them had contributed to inventory levels. We found that agencies' complaint inventories and, even more so, EEOC's hearings and appeals inventories had increased since fiscal year 1991; as the size of inventories grew, so did the average length of time that cases had been in inventory as well as the proportion of cases remaining in inventory longer than allowed by regulations; the size of the inventories and the age of cases in them increased as agencies and EEOC did not keep up with the influx of new cases; with the increased caseloads, EEOC and, to some extent, agencies took longer on average to process complaints, contributing to the size and age of inventories; and the implications of these trends were that inventories of cases pending would grow even larger in the future, particularly at EEOC, and that cases would take even longer to process. In updating our analysis, we used preliminary data for fiscal year 1998 provided by EEOC and reviewed the agency's budget request for fiscal year 2000 and its Annual Performance Plans for fiscal years 1999 and 2000. We also examined EEOC's planned changes to the complaint process. In addition, because postal workers have accounted for about half of the complaints filed in recent years, we separately analyzed data reported by the U.S. Postal Service in order to compare statistics for the postal workforce with the nonpostal workforce (see app. III).
Appendix I contains details about our scope and methodology. We requested comments on a draft of this report from the Chairwoman, EEOC, and the Postmaster General. Their comments are discussed near the end of this letter. We performed our work from March through May 1999 in accordance with generally accepted government auditing standards. Since we last reported in July 1998, agencies' complaint inventories and, even more so, EEOC's hearings and appeals inventories were once again higher. Table 1 shows the trends in the inventories of complaints at agencies and of hearing requests and appeals at EEOC for fiscal years 1991 to 1998. At agencies, the inventory of unresolved complaints had risen from 16,964 at the end of fiscal year 1991 to 34,286 by the end of fiscal year 1997. One year later, agencies' inventories of unresolved complaints had increased by an additional 6 percent, to 36,333. Inventory levels increased at the Postal Service and nonpostal agencies in fiscal year 1998, but growth was more rapid in the nonpostal agencies. Compared with fiscal year 1997, the Postal Service inventory increased by 3.3 percent, from 13,549 to 13,996 (see app. III, table III.1), while the inventories at nonpostal agencies rose by 7.7 percent, from 20,737 to 22,337. Overall, from fiscal year 1991 to fiscal year 1998, complaint inventories at federal agencies rose by about 114 percent. The increase in agencies' inventories was accounted for mainly by the growing number of the agencies' cases pending a hearing before an EEOC administrative judge. An agency's inventory of unresolved complaints is affected by EEOC's handling of hearing requests because EEOC must resolve a hearing request before an agency can make a final decision on the complaint. Of the 36,333 cases in agencies' inventories at the end of fiscal year 1998, 13,357 (about 37 percent) were awaiting a hearing before an EEOC administrative judge. The 13,357 cases awaiting a hearing before an EEOC administrative judge represented a 3,755-case (39 percent) increase over the fiscal year 1997 level of 9,602. The increase in the number of cases in the hearing stage more than offset reductions in the number of cases in agencies' inventories at the initial acceptance/dismissal and final agency decision stages of the complaint process. At EEOC, the inventory of hearing requests, which had increased from 3,147 at the end of fiscal year 1991 to 10,016 at the end of fiscal year 1997, increased by an additional 19.5 percent, to 11,967, by the end of fiscal year 1998. Overall, from fiscal year 1991 to fiscal year 1998, EEOC's hearing request inventory rose by about 280 percent. EEOC's inventory of appeals, which had increased from 1,466 to 9,980 during fiscal years 1991 to 1997, increased by an additional 9.9 percent, to 10,966, by the end of fiscal year 1998. Overall, from fiscal year 1991 to fiscal year 1998, EEOC's appeals inventory rose by 648 percent. (See app. IV, figure IV.2.) As the size of the inventories continued to grow, so did the average length of time that cases, and the conflicts underlying these complaints, remained unresolved. Table 2 shows the trends in the average age of complaints in agencies' inventories and of hearing requests and appeals in EEOC's inventories for fiscal years 1991 to 1998. The overall average age of unresolved complaints in agencies' inventories, after declining through fiscal year 1994, reached a new level of 446 days at the end of fiscal year 1998. The age of cases varied by the stage of the complaint process.
Table 3 shows the average age of complaints in inventory, from the time a complaint was filed, at various stages of the complaint process, both overall and at the Postal Service and nonpostal agencies at the end of fiscal year 1998. (Also see app. IV, figure IV.3 for trends in the average age of complaints in inventory at the various stages of the complaint process for fiscal years 1991 to 1998.) As table 3 shows, the complaints that were in agencies' inventories the longest at the end of fiscal year 1998 were those awaiting a hearing before an EEOC administrative judge. The average age of cases awaiting a hearing had a significant impact on the overall average age of unresolved complaints in inventory, particularly at the Postal Service. Because cases remained in inventory for lengthy periods, agencies frequently did not meet the regulatory requirement that they dismiss or accept a complaint, investigate an accepted complaint, and report the investigation results to the complainant within 180 days from the filing of a complaint (see app. IV, figure IV.4). The proportion of cases pending the initial acceptance or dismissal decision for more than 180 days stood at 32.5 percent in fiscal year 1998. At the Postal Service, 65.5 percent of cases in the acceptance/dismissal stage had been in inventory more than 180 days at the end of fiscal year 1998 (see app. III, table III.3); the figure for nonpostal agencies was 26.2 percent. Of the complaints pending investigation, 48.3 percent had been in inventory more than 180 days. At the Postal Service, 36.5 percent of cases in the investigation stage had been in inventory more than 180 days at the end of fiscal year 1998 (see app. III, table III.3); the figure for nonpostal agencies was 52 percent. At EEOC, the average age of cases in both the agency's inventory of hearing requests and its inventory of appeals was higher in fiscal year 1998 than in fiscal year 1997 (see table 2). The average age of hearing requests in inventory increased sharply, from 243 days in fiscal year 1997 to 320 days in fiscal year 1998. The figure for fiscal year 1998 is about 3 times what it was in fiscal year 1993, when the average age of a hearing request in inventory had reached a low of 105 days. As a result of the rising age of hearing requests in inventory, a greater proportion of these cases did not meet the requirement in EEOC's regulations that administrative judges issue a recommended decision within 180 days of a request for a hearing. In fiscal year 1998, 56.2 percent of the hearing requests had been in inventory longer than the 180-day time limit, up from 50.3 percent the previous year. EEOC has had increasing difficulty meeting the 180-day requirement since fiscal year 1993, when 13.3 percent of hearing requests had been in inventory longer than 180 days. (See app. IV, figure IV.6.) The increasing age of EEOC's hearing request inventory has been a major factor in the size and age of cases in agencies' inventories awaiting a hearing before an administrative judge. In contrast to hearing requests, table 2 shows a smaller increase in the average age of appeals in EEOC's inventory, from 285 days in fiscal year 1997 to 293 days in fiscal year 1998 (see app. IV, figure IV.5). Nonetheless, the figure for fiscal year 1998 is more than 3 times what it was in fiscal year 1992, when the average age of appeals in inventory was 87 days.
Although EEOC regulations prescribe time limits for processing hearing requests, they do not prescribe time limits for processing appeals. However, one indicator of the time it takes EEOC to process appeals is the percentage of cases remaining in inventory more than 200 days. EEOC's data show that in fiscal year 1998, 58.5 percent of the appeals cases remained in inventory longer than 200 days, a slight increase from fiscal year 1997, when this figure was 58 percent. However, the figures for fiscal years 1997 and 1998 represent a substantial increase compared with fiscal year 1991, when only about 3 percent of appeals had been in inventory longer than 200 days. (See app. IV, figure IV.7.) The size of the inventories and the age of the cases in them continued their upward trend as agencies and EEOC did not keep up with the influx of new cases. As discussed later in this report, the increase in the number of complaints did not necessarily signify an equivalent increase in the actual number of individuals filing complaints. Table 4 shows the trends in the number of complaints filed with agencies and the number of hearing requests and appeals filed with EEOC for fiscal years 1991 through 1998. At agencies, the overall number of complaints, which had increased from 17,696 in fiscal year 1991 to 28,947 in fiscal year 1997, declined by 2.8 percent, to 28,147, in fiscal year 1998. At the nonpostal agencies, the number of new cases declined, from 14,621 in fiscal year 1997 to 13,750 in fiscal year 1998. During this period, however, the number of new complaints at the Postal Service increased slightly, from 14,326 to 14,397 (see app. III, table III.5). Overall, the number of complaints filed with federal agencies in fiscal year 1998 was 59.1 percent higher than in fiscal year 1991. At EEOC, requests for hearings, which had increased from 5,773 to 11,198 during fiscal years 1991 to 1997, rose again, by 9.1 percent, to 12,218, in fiscal year 1998. Appeals to EEOC of agency decisions, however, which had increased from 5,266 to 8,453 during fiscal years 1991 to 1997, rose only slightly, by three-tenths of 1 percent, to 8,480, in fiscal year 1998. Historically, the rate of growth in the number of hearing requests filed has outpaced that of appeals. Compared with fiscal year 1991, the number of hearing requests filed in 1998 was 111.6 percent higher; the comparable figure for appeals was 61 percent. More recently, since fiscal year 1995, the number of hearing requests filed increased by about 16 percent, while the number of appeals filed increased by about 4 percent. Postal workers continue to account for a large and disproportionate share of complaints, hearing requests, and appeals. In fiscal year 1998, postal workers represented about 32 percent of the federal workforce and accounted for about 51 percent of complaints, about 47 percent of hearing requests, and about 47 percent of appeals. (See app. III, tables III.4 and III.5.) With increasing caseloads since fiscal year 1991, agencies and EEOC have been taking longer on average to process complaints, contributing to the size and age of the inventories. Table 5 shows the average processing time for complaints at agencies and for hearing requests and appeals at EEOC for fiscal years 1991 to 1998. The overall average number of days agencies took to close a case, which had reached a low of 305 days in fiscal year 1995, was 384 days in fiscal year 1998. This represented a slight improvement over fiscal year 1997's 391-day average.
Average closure time varied according to the type of closure action. In addition to closing cases by dismissing them or by issuing final decisions on their merits (with and without a hearing before an EEOC administrative judge), an agency may settle a case with a complainant, or a complainant may withdraw his or her complaint. Table 6 shows average closure time for each type of closure overall and at the Postal Service and nonpostal agencies in fiscal year 1998 (see app. IV, figure IV.10 for average closure time by type of case closure for all agencies for fiscal years 1991 to 1998). Table 6 shows that, in general, the Postal Service processed cases more quickly than nonpostal agencies in fiscal year 1998. One factor may have been that the Postal Service investigated complaints more quickly than nonpostal agencies. In fiscal year 1998, a complaint investigation at the Postal Service took an average of 174 days from the time a case was assigned to an investigator to when the investigation was completed. The comparable figure at nonpostal agencies was 283 days. Table 6 also shows that complaints with final agency decisions involving a hearing took the longest to close. This figure is affected by EEOC's performance because a hearing precedes an agency's final decision; the longer EEOC takes to process a hearing request, the longer it will take an agency to make its final decision. As will be discussed below, EEOC has been taking longer to process hearing requests. The increases in the amount of time to process cases were most apparent at EEOC. The average amount of time EEOC took to process a hearing request, which had increased from 173 days in fiscal year 1991 to 277 days in fiscal year 1997, increased further, to 320 days, in fiscal year 1998, well in excess of the 180-day requirement in regulations. Also, the time EEOC took to adjudicate an appeal, which had increased from 109 days in fiscal year 1991 to 375 days in fiscal year 1997, rose substantially in fiscal year 1998 to 473 days—or by 26 percent. Because of the length of time taken by agencies and EEOC to process cases, parties to a case traveling the entire complaint process—from complaint filing through hearing and appeal—could expect the case to take 1,186 days, based on fiscal year 1998 data. In fiscal year 1997, this figure was 1,095 days. The implications of these trends, at least in the short run, are that inventories of unresolved cases may grow even larger, particularly at EEOC, and that cases, as well as the conflicts underlying these cases, may take even longer to resolve than they currently do. The long-term outlook is uncertain. Only when EEOC and agencies are able to process and close more cases than they receive will progress be made toward reducing backlogs. The size of the caseloads will be influenced by the effect of revisions to the complaint process regulations and procedures, while agencies' and EEOC's capacity to process cases will be affected by available resources. EEOC projects that the number of new cases will continue to rise and exceed its capacity to process them, resulting in yet higher inventories and case processing times. EEOC's projections, however, do not take into account how complaint process revisions may affect caseload trends and resource needs.
In our July 1998 report about rising trends in EEO complaint caseloads, we reported that the increase in the number of discrimination complaints could be attributed to several factors, according to EEOC, dispute resolution experts, and officials of federal and private-sector organizations. One factor that experts and officials cited for the increase in complaints was downsizing, which resulted in appeals of job losses and reassignments. A second factor was the Civil Rights Act of 1991, which motivated some employees to file complaints by allowing compensatory damage awards of up to $300,000 in cases involving unlawful, intentional discrimination. A third factor was the Americans With Disabilities Act of 1990, which expanded discrimination protection. EEOC and Postal Service officials also said that the current regulations governing the EEO complaint process, implemented in October 1992, were a factor because they provided improved access to the complaint process. In a report we issued in May 1999, however, we said that there were several factors indicating that an increase in the number of complaints did not necessarily signify an equivalent increase in the actual number of individuals filing complaints. First, an undetermined number of federal employees have filed multiple complaints. EEOC officials and representatives of the Council of Federal EEO and Civil Rights Executives said that, while they could not readily provide figures, it has been their experience that a small number of employees—often referred to as "repeat filers"—account for a disproportionate share of complaints. A Postal Service official said that between 60 and 70 employees account for every 100 complaints filed. Additionally, an EEOC workgroup that reviewed the federal employee discrimination complaint process reported that the number of cases in the system was "swollen" by employees filing "spin-off" complaints—new complaints challenging the processing of existing complaints. Further, the workgroup found that the number of complaints was "unnecessarily multiplied" by agencies fragmenting some claims involving a number of different allegations by the same employee into separate complaints, rather than consolidating these claims into one complaint. In addition, there has been an increase in the number of complaints alleging reprisal, which, for the most part, involve claims of retaliation by employees who have previously participated in the complaint process. Further, in past reports and testimonies, we noted, among other things, that the discrimination complaint process was burdened by a number of cases that were not legitimate discrimination complaints; some were frivolous complaints or attempts by employees to get a third party's assistance in resolving workplace disputes unrelated to discrimination. Similarly, EEOC reported in its 1996 study that a "sizable" number of complaints might not involve discrimination issues but instead reflect basic communications problems in the workplace. EEOC said that such issues may be brought into the EEO process because of a perception that there is no other forum to air general workplace concerns. The agency also said that there is little question that these types of issues would be especially conducive to resolution through alternative dispute resolution (ADR) processes. EEOC will be implementing regulatory and procedural changes beginning in November 1999 to deal with some of the factors contributing to the volume of complaints flowing through the process.
One change will allow agencies and administrative judges to dismiss spin-off complaints. Another change will allow agencies and administrative judges to dismiss complaints in which employees are abusing the process. The revised regulations and EEOC's policies will deal with the problem of fragmented complaints. In addition, EEOC will require agencies to make ADR processes available to complainants. Among the factors that can affect inventory levels and case processing times is the relationship between the influx of cases and the capacity of staff to process them. Data that EEOC reports in the Federal Sector Report on EEO Complaints Processing and Appeals do not allow a precise comparison of the number of staff at agencies to caseloads at various stages of the complaint process. However, the data enable a comparison of EEOC's hearing and appeal caseloads to the number of nonsupervisory administrative judges and attorneys available to process these cases. These data show that as the overall number of hearing requests received each year increased by 111.6 percent, from 5,773 in fiscal year 1991 to 12,218 in fiscal year 1998 (see table 4), the number of administrative judges available for hearings increased at a lower rate (41.5 percent) during this period, from 53 to 75. These data also show that as the number of appeals increased by 61 percent, from 5,266 in fiscal year 1991 to 8,480 in fiscal year 1998 (see table 4), the number of attorneys processing appeals actually declined, from 40 in fiscal year 1991 to 39 during fiscal years 1992 to 1998. Although EEOC officials recognized the need for additional staff to process hearings and appeals, they said that requested funds for the needed positions were not appropriated. At EEOC, the hearings and appeals inventories rose because the average caseload for each administrative judge and attorney outpaced increases in their productivity. The number of hearing requests received each year per administrative judge rose, from 109 in fiscal year 1991 to 163 by fiscal year 1998. The hearings inventory grew larger because, although the average number of cases processed and closed each year per administrative judge increased, this figure was, except for fiscal year 1993, always less than the average number of requests received. In fiscal year 1991, administrative judges processed and closed 95 hearing requests, a figure that increased to 135 by fiscal year 1998. The situation for appeals was similar. The number of appeals received each year per attorney increased, from 133 in fiscal year 1991 to 217 by fiscal year 1998. The appeals inventory grew because the average number of cases processed and closed each year per attorney was, except for fiscal year 1991, always less than the average number of appeals received. In fiscal year 1991, attorneys processed and closed an average of 133 cases, a figure that increased to 192 by fiscal year 1998. To deal with the imbalance between new cases and closures, EEOC's fiscal year 1999 budget provided for an increase in its administrative judge and appeals attorney corps. Under the fiscal year 1999 budget, the authorized number of administrative judges increased by 19, from 75 to 94, while the authorized number of appeals attorneys increased by 14, from 39 to 53. Even with these added resources, the hearings and appeals inventories may continue to rise unless the flow of new cases is reduced, as the arithmetic sketched below illustrates.
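The relationship the preceding paragraphs describe is simple flow arithmetic: an inventory grows in any year in which receipts exceed closures. A minimal sketch in Python, using only the fiscal year 1991 and 1998 figures quoted above (the per-staff closure rates are rounded averages, so the computed net changes will not exactly match the reported inventory changes):

```python
# Per-staff caseload vs. closures, from the fiscal year 1991 and 1998
# figures reported in the text.
workloads = {
    # label: (cases received, staff on board, average closures per staff member)
    "hearing requests, FY1991": (5_773, 53, 95),
    "hearing requests, FY1998": (12_218, 75, 135),
    "appeals, FY1991": (5_266, 40, 133),
    "appeals, FY1998": (8_480, 39, 192),
}

for label, (received, staff, closed_each) in workloads.items():
    per_staff = received / staff          # intake per judge or attorney
    net = received - staff * closed_each  # cases added to (or removed from) inventory
    print(f"{label}: {per_staff:.0f} received per staff member, "
          f"{closed_each} closed; net inventory change {net:+,}")
```

Run as written, the sketch shows the pattern in the text: by fiscal year 1998, each administrative judge was receiving about 163 requests but closing about 135, and each attorney was receiving about 217 appeals but closing about 192, so both inventories grew.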
EEOC estimates that with the full complement of administrative judges on board in fiscal year 2000, it will be able to process and close 11,280 hearing requests, or 120 cases per judge, each year. This figure is 938 cases fewer than the 12,218 hearing requests EEOC received in fiscal year 1998. If, for example, the number of hearing requests received in fiscal year 2000 remained at fiscal year 1998 levels, EEOC's hearings inventory would increase by 938 cases during the year, while the average time EEOC takes to process a hearing request would grow by about 30 days. Over 5 years, with no change in the number of new cases received each year or resources to process them, EEOC's hearings inventory could increase by 4,690 cases, while adding 150 days to the average processing time.

Similarly, when the full complement of appeals attorneys is on board by fiscal year 2000, EEOC estimates it will be able to process and close 7,685 appeals, or 145 cases per attorney, each year. This figure, however, is 795 cases fewer than the 8,480 appeals filed in fiscal year 1998. If, for example, the number of appeals filed in fiscal year 2000 remained at fiscal year 1998 levels, EEOC's appeals inventory would increase by 795 cases during that year, while the average processing time would increase by about 37 days. Over 5 years, with no change in the number of new cases filed each year or resources to process them, EEOC's appeals inventory could increase by 3,975 cases, while adding about 186 days to the average processing time.

While our analysis assumed no increase in the number of new cases, EEOC's fiscal year 2000 budget request projects that incoming hearing requests and appeals would rise at an annual rate of 3 percent and exceed the number of cases it can close. As a result, according to the agency, hearings and appeals inventories and processing times will continue to climb, further affecting the agencies' inventories and case processing times. To deal with this situation, EEOC's fiscal year 2000 budget proposal requests funding for 19 additional administrative judges to process hearing requests and 13 additional attorneys to process appeals. The agency projects that with these additional resources, the hearings and appeals inventories and processing times would initially decline in fiscal year 2000, only to begin rising again in fiscal year 2004.

Neither our analysis nor EEOC's projections and requested funding increase take into account, however, the possible effects of changes to program regulations and procedures intended to reduce the number of cases flowing into and through the complaint process. Since EEOC's workload is dependent on the number of cases in the pipeline at agencies, it is important to understand how the program changes are likely to affect caseloads at agencies. The requirement that agencies offer ADR processes to employees, including in the counseling phase before a formal complaint is filed, should resolve some workplace disputes without a complaint being filed and resolve other disputes in the early complaint stages. Other changes allowing dismissal of spin-off complaints and other complaints in which an employee is believed to be abusing the process should halt the processing of these cases early in the process and possibly discourage the filing of such complaints. In addition, policies to prevent agencies from fragmenting cases should also reduce the number of new complaints.
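The 5-year projections above follow from simple steady-state arithmetic: each year, the shortfall between cases received and cases closed is added to the inventory, and the added processing time is roughly the shortfall divided by annual closure capacity, expressed in days. The sketch below reproduces those figures; the days formula is our reconstruction of the apparent method, not EEOC's published model.

    # Reconstruction of the hearings and appeals projections above.
    # Assumes flat intake at FY 1998 levels and EEOC's estimated closure
    # capacity; the days formula (shortfall / capacity * 365) is our
    # inference from the reported 30- and 37-day annual figures.
    def project(received, capacity, years=5):
        shortfall = received - capacity      # annual inventory growth
        backlog = shortfall * years          # growth over the period
        added_days = shortfall / capacity * 365 * years
        return backlog, added_days

    hearings = project(received=12218, capacity=11280)
    appeals = project(received=8480, capacity=7685)
    # Yields (4690, ~152 days) and (3975, ~189 days), closely matching
    # the reported 4,690 cases/150 days and 3,975 cases/about 186 days.
    print(hearings, appeals)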
However, although EEOC designed its changes to program regulations and procedures to reduce the flow of new cases, it has not estimated the likely effect of these changes on the volume of complaints. EEOC officials explained that they had deferred developing estimates until the regulations were approved because the details of the final regulations could affect caseload estimates. They also said that although one goal of the regulations is to reduce caseloads, another goal is to improve the fairness of the process. The EEOC officials said that one measure to improve fairness is to remove agencies' ability to reject or modify administrative judges' decisions in arriving at final decisions. The officials said that complainants could view this change as giving the administrative judges more authority, and they speculated that more complainants might seek a hearing as a result.

Estimates of the expected changes in complaint levels are important because a decrease in new complaints would affect how quickly EEOC might be able to reduce its inventories, and thus how many, if any, additional staff would be needed and for how long. EEOC's Compliance and Control Division Director said that it would be appropriate to consider the effects of these changes when the agency prepares its fiscal year 2001 budget request. Because the changes could begin affecting complaint levels in fiscal year 2000 and because any new staff, if not hired on a temporary basis, could be with EEOC a long time, estimates of likely changes in complaint levels also could be important to congressional consideration of EEOC's future budget requests. EEOC also has not completed the development of the measures and indicators that it will use in the future to gauge the actual effect of the changes. In its fiscal year 1999 annual performance plan, EEOC said that it would develop measures and indicators for assessing the effectiveness of these revisions, which, according to the agency's fiscal year 2000 Annual Performance Plan, would be implemented in fiscal year 2000.

Rising inventory levels of unresolved EEO complaints and lengthy case processing times to resolve these workplace disputes remain stubborn problems for agencies and EEOC. The struggle of nonpostal agencies was especially evident in that their inventories rose by almost 8 percent in fiscal year 1998 despite a 6 percent decline in new complaints. Similarly, despite increases in its productivity, EEOC's appeals inventory increased by almost 10 percent in fiscal year 1998, even though the number of appeals filed remained almost unchanged. At the same time, EEOC's inventory of hearing requests rose by almost 20 percent, about twice the rate of increase in new hearing requests that the agency received. How long present conditions will continue, and whether they will improve or deteriorate further, depends on the ability of agencies and EEOC to process cases currently in the complaint pipeline as well as on the volume of new complaints entering the pipeline in the future. Future trends and, therefore, agencies' and EEOC's resource needs are likely to be affected by the revisions to the complaint process. However, EEOC has not developed estimates of the extent to which revisions to complaint process regulations and procedures may affect the flow of cases into and through the process. Among the changes, the requirement that agencies offer ADR to complainants could reduce the number of new cases filed or resolve disputes in the early stages.
In addition, other changes to be implemented dealing with fragmenting of complaints, spin-off complaints, and abuse of process could reduce the number of new complaints or short-circuit them early in the process. EEOC's request for additional funding for attorneys and judges and the implementation of changes to program regulations and procedures in November 1999 lend urgency to gaining an understanding of the likely effects of the proposed changes on the complaint process and complaint inventories. In addition, until the measures and indicators promised in EEOC's fiscal year 1999 Annual Performance Plan are developed and implemented, the actual effect of the revisions on the EEOC complaint process will be difficult to track.

Estimates of the effect of the changes, combined with anticipated productivity levels, could be used to further estimate the resources needed to reduce EEOC's inventory of hearing requests to levels that would allow the average case to be processed within the 180-day requirement in regulations. Current regulations do not prescribe a processing time standard for appeals; such a standard could be established and used to develop estimates of the resources needed to reduce the average appeal processing time to an acceptable level of timeliness. In the case of both hearing and appeal processing, the estimates could be useful in determining how many, if any, additional staff are needed to reduce the backlogs and whether the staff should be a permanent or temporary addition to EEOC's workforce. Given the size of the backlogs, estimates for reducing them to acceptable levels over different time frames could allow EEOC and Congress to weigh the trade-offs between additional cost and how quickly the inventory of cases is resolved. Measures and indicators to assess the actual effect of changes in program regulations should be adopted before the changes are implemented to ensure that consistent data are collected from the start and that systems are in place to generate valid and reliable data.

To provide Congress with a clear picture of future caseload trends and the resources that are needed to deal with current backlogs, as well as the volume of cases expected in the future, we recommend that the EEOC Chairwoman take steps to (1) develop estimates of the effects of the forthcoming changes in program regulations and procedures on agencies' and EEOC's caseloads and (2) complete development of measures and indicators to track and assess the impact of these revisions on caseload trends. We also recommend that the Chairwoman use these data to develop estimates, under various time frames, of the resources needed to reduce its average hearings processing time to meet the 180-day requirement in regulations. We further recommend that the Chairwoman establish a policy of an acceptable level of timeliness for processing appeals and develop estimates, under various time frames, of the resources needed to reduce its average appeals processing time to meet this standard.

We received comments on a draft of this report from EEOC and the Postal Service. The EEOC Chairwoman said in her written comments (see app. V) that she shared our concerns that complaint inventories are too high and that federal employees wait far too long for their complaints to be processed by their agencies and EEOC. She said that analyses of the kind in our 1998 report on rising EEO complaint caseloads in the federal sector had persuaded her that bold steps were necessary to bring about improvements.
She said that, in addition to the changes in regulations, EEOC is implementing a comprehensive, strategic approach to link the hearings and appeals programs with strong oversight, technical assistance, and educational initiatives. These efforts are to include on-site reviews, which EEOC believes are one of the most important vehicles with which to focus on and correct root causes of persistent problems. Also, the Chairwoman said that with additional resources, EEOC would increase its efforts on conflict prevention and early intervention, since these are the most cost-effective ways to reduce inventories. Further, the Chairwoman pointed out that EEOC, with the National Partnership for Reinventing Government (NPR), is cosponsoring the Interagency Federal EEO Task Force that will look into ways to enhance the fairness, efficiency, and effectiveness of the federal employee EEO complaint process.

EEOC also responded to the first three of our four recommendations that it (1) develop estimates of the effects of changes in regulations on caseloads, (2) complete development of measures and indicators to track and assess the impact of these revisions, and (3) develop estimates of the resources needed under various time frames to reduce hearings and appeals processing times. EEOC said that right now it would be premature and highly speculative for the agency to venture guesses on what the actual experiences under the revised regulations might be. In addition, EEOC said that it was not possible to develop measures and indicators for assessing the effectiveness of the revisions to the federal sector EEO complaint process before the draft regulations were approved. However, with the publication of the final rules in the Federal Register on July 12, 1999, EEOC said that it expects to complete development of the measures and indicators by the end of fiscal year 1999. The Chairwoman added, however, that other complex issues must be resolved, including how baseline data will be collected and what data collection method will be used. Consequently, she said that the first year for which data will be collected on experiences under the revised regulations will be fiscal year 2001. She said that when these data are available at the end of calendar year 2001, it would be possible to estimate resource requirements under various time frames. The Chairwoman further said that these data would be used to prepare EEOC's fiscal year 2004 budget request, which would be submitted to the Office of Management and Budget in September 2002 and to Congress in early 2003.

We continue to believe that in order for Congress to carry out its oversight and appropriation responsibilities and make informed budget decisions, it needs timely estimates from EEOC of how changes in the complaint process may affect caseloads and resource requirements. Further, we believe congressional decisionmaking would benefit from EEOC's best estimate of the resources needed under various time frames to reduce hearings and appeals processing times to acceptable levels. With such estimates, Congress could consider options to deal with this serious situation. We recognize that early estimates may be inexact. However, without any estimate of the effect the new regulations may have on caseloads and without information on how quickly, if at all, additional staff might be able to reduce the current case backlogs, Congress has no basis to judge whether requested resources to increase staffing are reasonable.
Although initial estimates necessarily involve considerable judgment, we believe it would be better to offer estimates than to provide no perspective on the regulations' anticipated effect. Estimation is an iterative process, and EEOC can improve the precision of its estimates as more and better data become available. The Chairwoman said that EEOC will explore alternative means for obtaining feedback on the kinds of changes that may flow from the revised regulations. In addition to EEOC examining its own caseloads, such alternatives, we believe, could include obtaining data during on-site visits, through the NPR/EEOC Interagency Federal EEO Task Force, or through informal surveys of agencies. As EEOC and agencies scrutinize inventories to see how the new provisions apply to existing cases, such data-gathering initiatives could yield increasingly reliable and timely information on the effects of the new provisions.

In response to our fourth recommendation that an acceptable level of timeliness be established for the processing of appeals, the Chairwoman said that 180 days is an appropriate goal. She did not say how this goal might be operationalized. We believe that such a goal would carry more significance and accountability if it were articulated in writing as a policy, such as by inclusion in EEOC's annual performance plan.

In oral comments on a draft of this report made on July 7, 1999, the Postal Service Manager, EEO Compliance and Appeals, concurred with our observations. He added that the Postal Service will be in compliance with the new EEOC regulation requiring that ADR be available to complainants because of its REDRESS (Resolve Employment Disputes, Reach Equitable Solutions Swiftly) program. In a separate discussion, the Postal Service's National REDRESS Program Manager said that the program, which uses outside mediators in the precomplaint stage, was fully implemented as of July 1999. She provided statistics showing that during the first 10 months of fiscal year 1999—a period during which the program was still being rolled out—there were about 17 percent fewer formal EEO complaints compared with the same period in fiscal year 1998 (7,050 versus 8,522). She and the EEO Compliance and Appeals Manager said this decline was in "large measure" due to the REDRESS program. The EEO Compliance and Appeals Manager also said that the Postal Service was expanding ADR to complaints awaiting a hearing before an EEOC administrative judge. He said that pilot programs have shown promise in reducing the inventory of complaints at this stage, with about one-third of the cases reviewed found to be candidates for settlement and another one-third found to be candidates for mediation. The remaining one-third, he said, will probably go to hearing. The official said that agencies have a responsibility to address these cases and can play an important role in reducing not only their own caseloads, but EEOC's as well.

The implications of the Postal Service's experience with ADR, if the reported results are sustained, are significant for several reasons. First, they show that an agencywide ADR program to resolve disputes at an early stage can reduce the number of formal complaints. Second, because postal workers account for about half of the EEO complaints filed by federal employees, a substantial reduction in the number of formal complaints by postal workers could mean a reduction in the number of cases entering EEOC's hearings and appeals pipeline.
Third, the Postal Service’s limited experience, under its pilot programs, of applying ADR to cases awaiting a hearing show that some portion of this inventory can be resolved without using EEOC hearing resources. Although the Postal Service has not had broad experience with applying ADR to cases awaiting a hearing, the experiences of the Merit Systems Protection Board (MSPB) may be instructive to agencies and EEOC in establishing dispute resolution strategies and allocating resources. MSPB has had a long-established policy of trying to settle cases it does not dismiss on jurisdictional or timeliness grounds. Over the past 10 years, MSPB has avoided hearings by settling about half of employee appeals of personnel actions. We are sending copies of this report to Senators Daniel K. Akaka, Thad Cochran, Joseph I. Lieberman, and Fred Thompson; and Representatives Robert E. Andrews, John A. Boehner, Dan Burton, William L. Clay, Chaka Fattah, William F. Goodling, Steny H. Hoyer, Jim Kolbe, John M. McHugh, David Obey, Harold Rogers, Joe Scarborough, Jose E. Serrano, Henry A. Waxman, and C. W. Bill Young in their capacities as Chair or Ranking Minority Members of Senate and House Committees and Subcommittees. We will also send copies to the Honorable Ida L. Castro, Chairwoman, EEOC; the Honorable William J. Henderson, Postmaster General; the Honorable Janice R. Lachance, Director, Office of Personnel Management; the Honorable Jacob Lew, Director, Office of Management and Budget; and other interested parties. We will make copies of this report available to others on request. If you or your staff have any questions concerning this report, please contact me or Assistant Director Stephen Altman on (202) 512-8676. Other major contributors to this report were Anthony P. Lofaro, Gary V. Lawson, and Sharon T. Hogan. As with our previous report about complaint caseloads, we developed information on complaints falling within the jurisdiction of the Equal Employment Opportunity Commission (EEOC), and not the Merit Systems Protection Board (MSPB), because (1) the vast majority of discrimination complaints fall within EEOC's jurisdiction and (2) concerns about case inventories and processing times raised in hearings before the House Subcommittee on Civil Service focused on complaints within EEOC's jurisdiction. We updated (1) trends in the size of inventories and the age of cases in inventory at the various stages of the equal employment opportunity (EEO) complaint process and (2) trends in the number of complaints filed by federal employees and the time taken by agencies and EEOC to process them to include fiscal years 1991 through 1998. Agencies' complaint data for fiscal year 1998, which EEOC provided and which we used in our analysis, were preliminary. We selected 1991 as a base year because it preceded intensive government downsizing, the implementation of new laws expanding civil rights protections and remedies, and the implementation of new regulations governing the federal employee EEO complaint process. Because postal workers accounted for about half the complaints filed since fiscal year 1995, we separately analyzed data reported by the Postal Service in order to compare statistics for the postal workforce with the nonpostal workforce. 
To update and analyze information about (1) the trends in the size and age of complaint inventories and (2) the number of complaints filed by federal employees and the amount of time taken by federal agencies and EEOC to process them, we obtained data reported (1) to EEOC by the Postal Service and other agencies and (2) by EEOC in its annual Federal Sector Report on EEO Complaints Processing and Appeals. We did not verify the data in EEOC's reports or data provided by the Postal Service. To make observations about the implications of the trends, we drew upon our analysis of the trend data, our past work, and discussions with EEOC officials. In addition, we reviewed EEOC's budget request for fiscal year 2000 and its annual performance plans for fiscal years 1999 and 2000. We also reviewed changes to the regulations governing the federal employee complaint process (29 C.F.R. part 1614) that are to be implemented beginning in November 1999.

We have previously noted limitations to the data presented in our reports because of concerns about the quality of data available for analysis. Although we have no reason to question EEOC's statistics about its own hearings and appeals activities, we had identified errors and inconsistencies in the data on agencies' inventory levels and on the age of cases in inventory. Because EEOC had not verified the data it received from agencies, it is possible that other data problems may have existed. EEOC corrected the errors we identified and, in response to a recommendation we made, said that it would take action to address our concerns about data consistency, completeness, and accuracy. Before providing the fiscal year 1998 agency data to us, EEOC reviewed agencies' hard-copy submissions of complaint statistics and compared these data to statistics the agencies provided in an automated format. EEOC also tested the accuracy of its computer program to aggregate the data submitted by agencies. In response to our recommendation in an earlier report, EEOC said that, before it publishes the complaint statistics in the fiscal year 1998 Federal Sector Report on EEO Complaints Processing and Appeals, it would visit selected agencies to assess the reliability of the reported data. On balance, total caseload data currently available, while needing further quality assurance checks, present useful information on the volume of complaints actually being processed in the federal EEO complaint system. We performed our work in Washington, D.C., from March through May 1999 in accordance with generally accepted government auditing standards.

Agencies and EEOC process federal employees' EEO complaints under regulations promulgated by EEOC, which also establish processing time standards. Employees unable to resolve their concerns through counseling can file a complaint with their agency, which either dismisses or accepts it (the first stage) and, if the complaint is accepted, conducts an investigation (the second stage). Agencies are to decide whether to accept a complaint, investigate it, and report investigation results within 180 days from the complaint's filing. After receiving the investigation results, an employee who pursues a complaint has two choices: (1) request a hearing before an EEOC administrative judge (the third stage), who issues a recommended decision, which the agency can accept, reject, or modify in making its final decision, or (2) forgo a hearing and ask for a final agency decision (the fourth stage). An employee has 30 days to make this decision.
When a hearing is requested, the administrative judge is to issue a recommended decision within 180 days of the request. An agency is to issue its final decision within 60 days of receiving an administrative judge's recommendation or a request for a final decision. Up to this point, EEOC standards have allowed complaint processing to take up to 270 days without a hearing and 450 days with one. An employee dissatisfied with a final agency decision or with an agency's decision to dismiss a complaint may appeal to EEOC, which is to conduct a de novo review (the fifth stage). The employee has 30 days to file an appeal, but regulations do not establish time standards for EEOC's review. In the final (sixth) stage within the administrative process, the complainant or agency may request EEOC to reconsider its decision on the appeal within 30 days of receiving the decision. However, regulations do not establish time standards for EEOC's reconsideration.

EEOC will be implementing revisions to the regulations, including changes to hearing and appeal procedures, beginning in November 1999. Under the new rules, administrative judges will continue to issue decisions on complaints referred to them for hearings. However, agencies will no longer be able to modify these decisions. Instead, as its final action (as final decisions will be called), an agency will issue a final order indicating whether or not it will fully implement the administrative judge's decision. If the agency does not fully implement the decision, it will be required to file an appeal of the decision with EEOC. Employees will retain the right to appeal an agency's final action to EEOC. In addition, the decision on an appeal from an agency's final action will be based on a de novo review, except that the review of the factual findings in a decision by an administrative judge will be based on a substantial evidence standard of review.

Table III.1: Total and Postal Service Inventories of Complaints, Hearing Requests, and Appeals and Postal Service as a Percentage of the Totals for Fiscal Years 1991-1998 (table not reproduced here)

The following figures show the trends in (1) inventories of unresolved equal employment opportunity (EEO) complaints at federal agencies and the Equal Employment Opportunity Commission (EEOC); (2) the age of cases in the inventories; (3) the number of complaints, hearing requests, and appeals filed; and (4) processing times for complaints, hearings, and appeals.

Figure IV.3: Average Age of the Complaint Inventory at Agencies, FYs 1991-1998 (figure not reproduced here)
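Returning to the processing time standards described above: the 270- and 450-day totals are the sums of the stage-level limits in the regulations. A minimal sketch of that arithmetic follows; the stage labels are ours, while the day limits are those cited in this report.

    # Cumulative EEOC time standards, per the stage limits in regulations.
    stages = {
        "agency accepts and investigates complaint": 180,  # days from filing
        "employee elects hearing or final decision": 30,
        "administrative judge issues recommended decision": 180,
        "agency issues final decision": 60,
    }

    without_hearing = (stages["agency accepts and investigates complaint"]
                       + stages["employee elects hearing or final decision"]
                       + stages["agency issues final decision"])
    with_hearing = sum(stages.values())

    print(without_hearing, with_hearing)  # 270, 450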
Pursuant to a congressional request, GAO provided information on the Equal Employment Opportunity Commission's (EEOC) complaint caseload, focusing on: (1) trends in the size of inventories and the age of cases in inventory at various stages of the EEO complaint process; (2) trends in the number of complaints filed by federal employees and the time taken by agencies and EEOC to process them; and (3) implications of these trends and how future caseloads may be affected by EEOC's regulatory changes to the complaint process. GAO noted that: (1) inventories of unresolved federal sector discrimination cases at agencies and EEOC have continued to grow; (2) overall, from fiscal year (FY) 1991 to FY 1998, complaint inventories at federal agencies rose by about 114 percent, to 36,333; (3) at EEOC, during the same period, the hearings inventory rose by 280 percent, to 11,967, while the appeals inventory went up by 648 percent, to 10,966; (4) as inventories grew, the average age of cases in agencies' inventories (446 days) and EEOC's hearings (320 days) and appeals (293 days) inventories also reached new levels; (5) the size of the inventories and the age of cases in them continued their upward trend during FY 1998 as neither the agencies nor EEOC kept up with the influx of new cases; (6) agencies' inventories grew by 6 percent in FY 1998 despite a 2.8 percent decline in the number of new complaints; (7) the growth in EEOC's inventory of hearing requests during this period--19.5 percent--was greater than the increase in the number of new hearing requests, which rose by about 9.2 percent; (8) at the same time, EEOC's appeals inventory increased by 9.9 percent, even though the number of new appeals filed remained almost unchanged; (9) the average time to process a complaint at agencies showed a small decline in FY 1998, from 391 to 384 days, but there were sharp increases in the average time EEOC took to process hearing requests (rising from 277 to 320 days) and appeals (rising from 375 to 473 days); (10) a case traveling the entire complaint process could be expected to take 1,186 days to process, based on FY 1998 data; (11) this was 91 days longer than in FY 1997; (12) the logjams at EEOC and agencies are likely to persist, at least in the short run, as long as agencies and EEOC receive more new cases than they process and close; (13) the long-term outlook, however, is unclear; (14) substantive revisions to complaint program regulations and procedures are to be implemented beginning in November 1999; (15) these revisions are intended to reduce the volume of cases flowing through the complaint process; (16) the revisions include a requirement that agencies offer alternative dispute resolution, as well as other rules to reduce the opportunities for multiple complaints by the same complainant; and (17) however, EEOC has not yet developed estimates of how the revisions to program regulations will affect caseload trends and resource needs, nor has the agency completed development of measures and indicators to track the effects of these revisions once they are implemented.
Consumers generally subscribe to broadband Internet in two ways:

Mobile: Traditionally, mobile providers sold access to the Internet as an add-on to mobile telephone service plans that may or may not include a multiyear contract. Mobile service is provided through cell tower coverage, with data shared over radio spectrum. Because of a number of factors, including the number of users sharing certain parts of the network and the amount of data transmitted, mobile networks can experience congestion, or a slowdown in connection speeds. Subscribers can connect a variety of devices directly to mobile networks (see fig. 1).

Fixed: In-home fixed Internet plans are often sold as a monthly subscription by cable television or telephone companies. Service from cable television companies is generally provided through the same coaxial cables that deliver television programming. Service from telephone companies is generally provided through the telephone lines (digital subscriber line service) that provide telephone voice services or through fiber-optic lines, which convert electrical signals carrying data into light and send the light through glass fibers. These network technologies generally have higher data transfer rates than mobile networks. Consumers can connect a variety of devices to in-home fixed networks through a wired connection or a wireless Wi-Fi connection (see fig. 2).

Consumers are increasingly using the Internet to supplement or replace their use of traditional services, such as traditional telephone and cable TV service. Different types of Internet applications use varying amounts of data. Figure 3 below shows selected examples of how much data certain applications use, as reported by fixed Internet providers on their websites. Internet applications that use small amounts of data can include e-mail. Applications that can use large amounts of data include streaming video—more than a gigabyte per use of the application.

Mobile and fixed Internet data use is likely to grow significantly in the future. Cisco projects consumer fixed data usage to grow at an annual rate of 21.5 percent from 2013 to 2018. Cisco also projects that mobile data usage in North America will grow at an average of 50 percent annually from 2013 to 2018, while Ericsson—a provider of network infrastructure—projects North American mobile data growth of 38 percent per year between 2013 and 2019. This trend is due to multiple factors, including increased adoption of smartphones and tablets, growth in streaming video, innovation in services and applications, growth in subscriber numbers, and the development of faster networks. Although Internet providers have expanded their networks' capacity, the potential for in-network congestion that could slow the transfer of data has increased due to a number of factors, including the increase in streaming video and the use of multiple broadband devices.

Usage-based pricing (UBP) is the practice of pricing or otherwise adjusting service, such as connection speed, based on the volume of data transmitted. According to FCC's Open Internet Advisory Committee, UBP by Internet providers can take the form of data allowances (sometimes referred to as "caps"), in which a subscriber obtains a defined amount of data per month. Subscribers who exceed their allowance could face a number of actions, including additional charges for additional data or a reduction in connection speeds—known as "throttling." Members of the Open Internet Advisory Committee include consumer advocates, engineers, content providers, service providers, and equipment manufacturers.
The Committee aids FCC in tracking developments related to the openness of the Internet (Federal Communications Commission, Open Internet Advisory Committee, Open Internet Advisory Committee 2013 Annual Report, Aug. 20, 2013). FCC seeks to accomplish its goals in this area by collecting data, monitoring broadband availability, and establishing rules for transparency in Internet service, among other things.

Mobile: Based on our analysis of data plans, all four mobile providers we reviewed now offer some form of UBP Internet plans. Of these four providers:

Two offer data allowances ranging from a low of 250 or 300 megabytes (MB) to a high of 100 gigabytes (GB) a month, whereby larger allowances cost more overall (though less on a per-MB or per-GB basis) and customers can share one data allowance among multiple devices. They impose overage charges (which can range from $20 for 300 MB on small data plans to $10 or $15 for 1 GB of additional data on larger data plans) for customers who exceed their allowance.

One offers a variety of data plan options, all of which feature unlimited data. Customers can select unlimited high-speed data or high-speed data allowances ranging from 1 GB to 21 GB. Once customers reach their high-speed data allowance, they may continue to access unlimited data, but at slower connection speeds of 128 kilobits per second or less—slower than speeds FCC recommends for browsing the web or downloading e-mail.

One offers unlimited data plans but also usage-based plans with allowances ranging from 1 GB to 120 GB per device. Customers who exceed their data allowance are subject to overage charges of 1.5 cents per MB.

All four offer their data plans equally across their entire customer base in all markets they serve.

Fixed: Based on our analysis of data plans, seven of the 13 fixed Internet providers we reviewed are now offering, to some extent, Internet plans that include elements of UBP. Of these seven providers:

Three use data allowances (ranging from 150 GB to 4,000 GB) where higher allowances are generally tied to faster connection speeds and higher overall prices (though less on a per-GB basis). They impose fees on customers who exceed their allowance (generally starting at $10 for an extra 50 GB).

Two have data allowances but do not impose fees or normally take other action when customers exceed their data allowance.

One offers a low-data allowance option (5 GB or 30 GB per month) at a discounted rate ($8 or $5 off a month, respectively) off the normal prices for some of its unlimited data plans. Customers who exceed their data allowance are assessed overage fees of $1 per GB over the limit, with a maximum charge of $25 per month.

One is testing multiple UBP approaches in 14 select markets of varying sizes around the country. This provider is testing plans including those with allowances and overage fees. Allowances are generally 300 GB regardless of the plan's connection speeds (in one market, allowances can increase to 600 GB, depending on plan speed), and overages are generally $10 for 50 GB of additional data. In some of these markets, this provider offers a low-data plan at a discount (a $5 discount off the regular price for unlimited data for a 5 GB data allowance, with overage charges of $1 per GB). According to one fixed provider's website, 5 GB of data is equal to about 3 hours of streaming high-definition video or streaming 1,250 songs.
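To illustrate how the allowance-plus-overage structures described above translate into a monthly bill, the following minimal sketch computes charges under assumed plan terms drawn from the ranges cited in this report (a 300 GB allowance with $10 per 50 GB block of overage). The base price and usage figure are illustrative assumptions, not any specific provider's terms.

    import math

    def monthly_bill(base_price, allowance_gb, usage_gb,
                     overage_block_gb, overage_block_price):
        # Bill under a data allowance with per-block overage fees.
        # Illustrative only; real providers' billing rules vary.
        overage = max(0.0, usage_gb - allowance_gb)
        blocks = math.ceil(overage / overage_block_gb) if overage else 0
        return base_price + blocks * overage_block_price

    # Example: $65 base (assumed), 300 GB allowance, $10 per extra 50 GB,
    # 420 GB used: 120 GB over, so 3 blocks and $30 in overage fees.
    print(monthly_bill(base_price=65, allowance_gb=300, usage_gb=420,
                       overage_block_gb=50, overage_block_price=10))  # 95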
Aside from the provider that is testing UBP in selected markets, all providers that use UBP do so in all markets they serve across their entire customer base or plan to do so in the future. Both mobile and fixed Internet providers have increasingly been moving from unlimited data plans to usage-based plans.

Mobile: Two mobile providers, which together have over 68 percent of the mobile market, first introduced UBP in 2010 and 2011 and no longer offer unlimited data plans. The other two providers continue to offer unlimited data but, according to officials we interviewed, first introduced UBP plans in 2013 to provide greater choice to consumers and to compete with other mobile provider plans. Since first introducing UBP-based plans, all four providers have increased both the variety of data plans and the levels of data allowances in these plans. For example, one provider recently offered a promotion that doubled data allowances for selected data plans without increasing prices.

Fixed: Most fixed Internet providers that have UBP introduced their current data plans in the past 4 years. However, unlike mobile providers, not all fixed providers that use UBP have increased their data allowances on existing plans since 2012. Some higher speed, higher priced plans introduced since 2012, however, have come with higher data allowances than previously existing plans. The number of providers that utilize UBP and, therefore, the number of Internet customers affected by it could grow in the future. While providers we interviewed that do not use UBP said they have no plans to introduce such plans in the future, they added that they will continue to track the market and would not rule out using UBP.

Given that, under UBP, the price consumers pay for Internet access can depend on how much data they use, it may be important for consumers to have a thorough understanding of their data needs and usage. Under unlimited data plans, consumers do not necessarily need to be aware of their data usage, as the price they pay for service is unrelated to their data usage. But under UBP, for example, if consumers do not understand their data usage, they may choose plans with allowances that are larger—and cost more—than needed. Alternatively, they may purchase too little data and potentially face overage charges. Furthermore, "hidden" data uses—such as automatic updates and applications that push content to devices without consumers being aware of it—could represent as much as 30 percent of data use, meaning consumers could use large amounts of data without their knowledge.

Both mobile and fixed providers that use UBP offer a variety of tools to their customers to help them understand, and estimate, their data usage; however, fixed providers do so to a lesser extent. Tools that providers offer to customers include: tools to estimate data usage based on the consumer's estimate of their usage, including monthly e-mails, web pages, and videos; discussions with customer service representatives on the appropriate data plan, given estimated or prior usage; web-based tools and details on customer bills regarding actual current and historic usage; and alerts—such as through e-mail or texts—when customers approach or exceed data allowances. However, some of these efforts may have limited value for fixed Internet consumers due to a number of potential weaknesses. First, different providers may provide varying estimates of data usage for similar applications, as shown in figure 4.
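One reason such estimates can differ is that the data consumed by an application like streaming video depends on an assumed bitrate. The sketch below shows the underlying arithmetic; the bitrates are illustrative assumptions, not figures from any provider cited in this report.

    # Data used by one hour of streaming video at an assumed bitrate.
    # GB/hour = megabits per second * 3600 seconds / 8 bits per byte
    #           / 1000 MB per GB (decimal gigabytes assumed).
    def gb_per_hour(bitrate_mbps):
        return bitrate_mbps * 3600 / 8 / 1000

    # Illustrative bitrates: a provider assuming ~3 Mbps HD video would
    # estimate ~1.35 GB/hour, while one assuming ~8 Mbps would estimate
    # ~3.6 GB/hour, a spread of the magnitude seen in the per-movie
    # estimates discussed in this report.
    for mbps in (3, 8):
        print(mbps, "Mbps:", gb_per_hour(mbps), "GB per hour")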
While these differences may be due to the different technologies providers use, varying estimates could be confusing to consumers. Second, provider estimates of the data usage of certain applications—and the large variation in such estimates for applications such as streaming video—could be a factor in consumers' decisions regarding what data plan to subscribe to. Estimates can differ from one customer support document to another; we found one provider that estimated 4 GB of data usage for a 1-hour movie in one document available to consumers, but in another document estimated only 1.5 GB. As discussed later in this report, participants in our focus groups expressed confusion regarding their data usage, including the amount of data that certain types of applications use. Third, data usage meters—which are used by providers to measure how much data their customers use—may be inaccurate. An official with a company that conducts internal audits of fixed-Internet-provider data meters and their integration into databases used for billing systems told us that while some of their audits have shown that meters are accurate, others have shown the need for improvements that those providers are in the process of making.

All four mobile providers we reviewed agreed to the voluntary Consumer Code for Wireless Service, which aims to help consumers make informed choices when selecting and managing their mobile services. This code, developed as a joint effort between FCC and CTIA—a group representing the mobile communications industry whose members cover the majority of mobile subscribers—gives providers guidelines for notifying consumers about data use through means such as text alerts, encouraging them to provide useful and consistent information to consumers. As mentioned earlier, one goal of FCC is to protect the public interest in the provision of telecommunications services. According to FCC officials, following the implementation of this code by providers, complaints to FCC by mobile customers regarding overage charges on bills dropped.

Although fixed providers make certain consumer education efforts regarding data use—such as providing alerts—there has not been a formal effort facilitated by FCC like the Consumer Code for Wireless Service. According to FCC officials, FCC has focused its effort so far on mobile Internet, as more mobile consumers are affected by usage-based pricing at this time. According to FCC officials, the volume of consumer complaints filed with FCC regarding fixed-Internet UBP is relatively low when compared with the overall number of complaints concerning broadband service. However, the low number of complaints does not necessarily mean that consumers are satisfied with the information provided by their fixed providers. We have previously found that consumers may not know that they can file complaints with FCC (GAO, Telecommunications: FCC Needs to Improve Oversight of Wireless Phone Service, GAO-10-34, Washington, D.C.: Nov. 10, 2009). As a result of the lack of a fixed-provider code of conduct, information provided to consumers is not always consistent or easy to understand. This could result in a lack of consumer education regarding data use that, as mentioned earlier, could lead to consumers not purchasing their ideal data plan. We recommended that FCC clearly inform consumers about its complaint process. FCC is currently reforming its consumer complaint process and expects this effort to be completed by the end of 2014.
Under the Open Internet Transparency Rule, providers must be open and transparent about the terms and conditions of their Internet plans. FCC issued an FCC Enforcement Advisory to Internet providers in July 2014 summarizing the general requirements of this rule. However, the advisory did not specify what information providers must disclose or at what level of detail. Furthermore, fixed providers have received low satisfaction ratings in customer surveys. The American Customer Satisfaction Index (ACSI) notes that as the number of Internet users has grown and exceeded the number of households that have landline service, customer satisfaction with fixed providers has decreased. Concerns about data-related aspects of Internet service may also be reflected in consumers rating Internet providers the lowest among 43 household consumer industries in the ACSI. In particular, consumers report low levels of satisfaction with "ease of understanding their bill" and "variety of plans."

A common sentiment expressed by focus group participants was that they had no idea how much data they use at home—likely in large part because they have never been subject to data allowances and, therefore, have not needed to consider their data usage at home. One participant, for example, said, "It's unlimited… so we don't really pay attention." In all eight groups, participants said that they frequently connect their mobile devices to their in-home Wi-Fi without considering the amount of data these devices use on their in-home network. In seven of the eight groups, we heard from participants who said that they would face challenges in tracking the data usage of the multiple people and the multiple devices in their household. For example, having to tell other family members, such as children, to reduce their data usage was cited as a challenge.

In addition, focus group participants exhibited many instances of confusion and a lack of understanding of their fixed-Internet data usage. For example, some participants were concerned that fixed Internet UBP might require them to limit data-light activities such as online shopping in order to avoid exceeding their data allowance, when in reality any normal level of those activities would be unlikely to bring a user near even a low data allowance. Others classified themselves as heavy data users during our initial screening process, despite the fact that during group discussions they said that they primarily use low-data applications such as online shopping. Those participants could therefore potentially benefit from a lower-priced low-data plan as opposed to an unlimited data plan.

Consumer surveys and studies also indicate that Internet users are not clear about the details of their Internet plans and data use. FCC reported in 2010 that 80 percent of broadband users did not know their home connection speed. A June 2013 Canadian Consumer Union study on mobile consumer data use concluded that the majority of respondents did not know and were not able to easily calculate their data usage, and more than a third of respondents did not know how quickly their usage limits could be reached. Although participants in our focus groups were widely subject to UBP for their mobile Internet, when asked what factors were important when considering a mobile data plan, participants expressed a preference for unlimited mobile data access in seven of our eight focus groups.
Across all eight focus groups, we found a mixture of participants who reported having mobile Internet service that included data allowances and those who identified their mobile data plan as having unlimited data. In three of the eight groups, we found participants who had held onto "grandfathered" unlimited plans by not upgrading their devices, potentially forgoing improved technology—both newer devices and faster networks that older devices may not be able to access—in exchange for unlimited data. Focus group participants expressed few broad concerns about mobile UBP; more focused concerns included avoiding overage fees and managing data plans shared by more than one person. For example, some participants discussed the difficulty of monitoring or controlling the data usage of family members—such as a child—with a device on a shared data account, given the potential for data overages.

Participants demonstrated that they have learned to adjust to mobile data allowances and throttling. They discussed the following strategies for limiting the amount of mobile data they use and otherwise adapting to mobile UBP: limiting use of data-heavy applications such as video; connecting devices to Wi-Fi at home and other places to avoid mobile data usage; and changing to data plans or providers that better meet their needs. (For example, we spoke with participants who reported having changed their data plans to increase their data allowance, while others, after realizing that they subscribed to a data plan that had more data than needed, changed their plans to reduce their data allowance as well as their price for service.) Participants also said that they were aware of the tools that their mobile provider offers them, including online tools to estimate data usage and text alerts when they near their data allowance. In all eight groups we found participants who track their data use through tools from their provider, such as online data meters and information on their bills, and in seven groups we found participants who had received alerts when their usage approached their data allowance. Participants noted that such tools were helpful in choosing a plan or avoiding overage charges.

The more limited use of UBP by fixed providers was evident in our focus groups. In three of the eight groups, participants reported experience with fixed Internet UBP, and in seven groups there were participants who said they were unaware that some fixed providers have implemented such plans. While we found that some participants in each group voiced positive reactions to the concept of fixed Internet UBP, there was more discussion regarding negative reactions and concerns. In expressing positive reactions toward the concept of fixed Internet UBP, participants noted the potential benefit of more pricing options. Other reactions included liking the idea of paying less money for less data or stating that it was fairer to pay only for data used. Others noted that they should pay less for their access than people who use a lot of Internet data. However, participants in all eight groups expressed strong negative reactions to the concept of fixed Internet UBP, and these discussions overshadowed discussions about the potential benefits of UBP. Common issues discussed at our groups included: Concern about the potential effects of data allowances, given the importance of the Internet in their lives.
Participants cited the importance of the Internet for commerce, education, and employment and expressed concern that UBP could limit their access to the Internet. Concern that providers would use UBP as a way of increasing the amount they charge everyone for Internet service, in part because in their view consumers—having become accustomed to unlimited data and reliant on Internet access—would have no choice but to pay more. Some were skeptical that UBP would be used to reduce prices for any customers. Concern that fixed Internet UBP would negatively affect certain populations, such as students and telecommuters who may use a lot of data at home and those with lower socioeconomic status who may have difficulty affording data plans with sufficient data allowances. In addition, in all eight groups there were participants who said they watch streaming video as a substitute for television, a group of consumers who may be more likely than others to be affected by UBP. Concern about having to monitor and potentially limit data usage at home if subject to UBP, given that participants were used to the freedom of unlimited home access. In six groups, at least one participant said that they would accept UBP if it were only used by providers to offer discounts on lower-data plans while unlimited data plans remained the standard.

While, as discussed earlier, participants have found ways to adapt to mobile UBP, such adaptation may not be as easy for fixed access. For example, fixed Internet consumers may be less willing to reduce their use of streaming video or other data-heavy applications at home. And while participants said they connect their mobile devices to Wi-Fi, there is no similar option for avoiding in-home data usage; although some participants said they might leave their house to use the Internet—for example, to connect to free Wi-Fi at coffee shops—that may not be possible for all in-home data uses, such as teleworking and education.

One economic rationale for UBP is to address situations where users take into account their own costs and benefits from Internet access but ignore other costs—such as congestion—that they impose on other users, referred to as "externalities." In the presence of such externalities, economists often propose that consumers be charged prices that reflect both of these types of costs in order to ensure that they use the resource efficiently. All four mobile providers told us they use UBP to address the usage of the heaviest users, manage their networks, or address congestion. All but one of the fixed providers we reviewed that enforce UBP said that they use UBP to address the usage of the heaviest data users. Most fixed providers said that their networks do not face widespread congestion. Most fixed providers we interviewed that use UBP said that they have set their allowances so that they currently affect, at most, only the 3 to 5 percent of users who use the most data, not average users. In addition, two fixed providers we interviewed told us that they are continually upgrading and expanding their networks to meet demand and that UBP can be used to ensure that heavier users contribute more to those costs than lighter users. However, some industry stakeholders we interviewed said that UBP may not be warranted to address the data usage of the heaviest users. Two public interest organizations claimed that because the marginal costs of data delivery are very low and falling, heavier users impose few additional costs on providers compared with lighter users.
According to FCC, however, prices set on marginal costs would generally not allow providers to recover their costs. In addition, according to one paper we reviewed, absent network congestion, one person's use of the Internet does not interfere with other users, meaning that there is not a need to limit activity. Officials with those organizations added that more targeted UBP approaches, such as peak pricing during congested times, could address predictable congestion when it exists and place less burden on consumers overall. However, according to FCC, there are problems with peak pricing, as it may not be an efficient means for ISPs to recover their costs of providing Internet access, or may not even allow them to do so.

Finally, to the extent that fixed Internet UBP is used to address the usage of the heaviest users, the number of customers affected could grow over time. According to Sandvine—a provider of networking solutions—fixed Internet customers who appear to use the Internet to replace traditional subscription-television service already use an average of 212 GB a month, close to many existing data allowances. Furthermore, according to FCC, the top 15 percent of cable Internet subscribers use over 145 GB of data a month, and fiber-optic Internet subscribers use over 120 GB per month. Based on Cisco's previously noted estimate of 21.5 percent annual growth in fixed Internet data usage, these users' consumption could reach more than double common current data allowances of 150 or 300 GB by 2020 (see fig. 5). In addition, more users could be affected by UBP in the future to the extent that average users begin using more data-heavy applications and content providers continue to develop more data-intensive content and applications.

A second rationale for UBP is that it has the potential to increase consumer welfare. According to economics literature, UBP can be interpreted as a form of price discrimination, where sellers offer the same or similar goods at different prices and consumers choose among these versions. In a competitive marketplace, this practice may enhance consumer welfare because firms may compete to offer different versions of a product at competitively low prices. As a result, in a competitive marketplace, providers could use UBP to innovate on the types of plans they offer. For example, this scenario could allow low-data users to buy plans with low data allowances at competitively lower prices, while heavier data users could buy plans with higher data allowances that best meet their needs. In contrast, in markets that are not very competitive, this kind of price discrimination may not be beneficial because limited competition gives the seller greater ability to make take-it-or-leave-it offers to consumers—who may face few choices to move to other providers—that may enhance providers' profits at the expense of consumer welfare. FCC officials said that they believe the economics literature is inconclusive on these matters.

There is generally more competition among mobile providers than among fixed providers. According to FCC, 54 percent of households are in census tracts that have more than two fixed providers with subscribers to services with a download speed of at least 6 Mbps and an upload speed of at least 1.5 Mbps. By contrast, according to FCC, almost 98 percent of the U.S. population lives in census blocks with more than two mobile providers.
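The projection above that heavy users' consumption could overtake today's allowances follows from simple compound growth. The sketch below applies Cisco's 21.5 percent annual growth rate to the usage levels cited in this report; the 2014 base year is our illustrative assumption.

    # Compound growth of monthly data usage at Cisco's projected 21.5%/year.
    # Starting points are the usage figures cited in this report; the 2014
    # base year is an assumption for illustration.
    GROWTH = 1.215

    for label, start_gb in [("cord-cutters (Sandvine)", 212),
                            ("top 15% of cable users (FCC)", 145)]:
        usage = {year: start_gb * GROWTH ** (year - 2014)
                 for year in range(2014, 2021)}
        print(label, {y: round(gb) for y, gb in usage.items()})
        # By 2020, 212 GB grows to roughly 683 GB and 145 GB to roughly
        # 467 GB, well past common 150 or 300 GB allowances.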
Participants in six of our eight focus groups said they would look to switch providers if faced with fixed Internet UBP, but participants in all eight groups said that they faced limited choice in providers, which may limit their ability to select a data plan that best meets their needs. All mobile providers and two fixed providers we interviewed told us that they in part use UBP to offer more Internet plan options to meet individual needs; as a result, providers do not require all consumers to pay for unlimited data or high-speed data at a fixed price. Consequently, consumers who want to use low amounts of data can pay less than those who use more. For example, one mobile provider offers a 1 GB data plan for $40 a month compared to $80 for a 6 GB data plan. The availability of low-price, low-data plans may also encourage some individuals or households without Internet access to subscribe. However, according to a recent survey by Pew Research, only 9 percent of non-Internet users cited price as a reason for not accessing the Internet, suggesting that the availability of low-priced plans may not substantially increase the number of new Internet subscribers.

The extent to which mobile and fixed Internet customers have benefited from low-cost, low-data plans is unclear at this time. While mobile customers can select from a wide range of data plans, customers who might benefit from the reduced prices of low-data plans may not be taking advantage of them. For example, one mobile provider we interviewed said that a "small percentage" of its customers are on 500 MB or smaller data plans. Yet, according to Cisco, about 25 percent of mobile customers use less than 200 MB of data a month. Fixed Internet customers—who, as mentioned, generally face less choice in providers than mobile customers do—generally have fewer plan options than mobile customers, especially for low-cost, low-data plans. However, it appears that a small percentage of customers subscribe to such options even though, according to FCC data, 30 percent of cable Internet customers use about 25 GB of data a month or less, meaning they could potentially take advantage of such plans. Another reason subscribers may not choose a discounted low-data plan is that the cost per GB of data is high, and the small discount (at most 20 percent off the unlimited data plan price) may not be worth the significant restriction in data usage relative to unlimited data; the per-GB arithmetic is sketched below. Further, only two fixed providers we interviewed offer data plans near or below the levels of median household data usage reported by FCC.

FCC's Open Internet Advisory Committee studied the issue of usage-based pricing and, believing that more fixed Internet customers could be affected by UBP in the future, recommended in August 2013 that FCC monitor fixed providers' application of UBP (FCC Open Internet Advisory Committee 2013 Annual Report, August 20, 2013). FCC has recently begun collecting relevant data—such as on plan prices and data allowances used by selected fixed providers. While FCC officials said that they will consider how they can use these data in the future, at this time FCC only uses these data to set a benchmark to ensure that providers receiving universal service funding are providing broadband services in rural areas at prices comparable to those in urban areas. FCC does not track providers' use of UBP, as FCC only recently started collecting data for the specific purpose mentioned above. For example, FCC is not analyzing the data to determine how many providers use UBP, the levels of data allowances, and how those allowances compare to average data consumption.
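To make the per-GB arithmetic referenced above concrete, the sketch below uses the mobile plan prices cited earlier; the $50 unlimited baseline and 25 GB allowance in the fixed example are hypothetical values chosen for illustration, not cited figures:

```python
# Per-GB arithmetic for capped plans. The mobile prices are the illustrative
# figures cited in the text; the fixed-plan baseline and allowance are
# hypothetical assumptions used only to show the effect of a 20% discount.
def price_per_gb(monthly_price: float, allowance_gb: float) -> float:
    """Effective price per GB if the plan's allowance is fully used."""
    return monthly_price / allowance_gb

# Mobile example cited in the text: $40 for 1 GB vs. $80 for 6 GB.
print(f"1 GB plan: ${price_per_gb(40, 1):.2f}/GB")   # $40.00/GB
print(f"6 GB plan: ${price_per_gb(80, 6):.2f}/GB")   # about $13.33/GB

# Hypothetical fixed example: the maximum 20% discount cited in the text,
# applied to an assumed $50 unlimited plan, in exchange for a 25 GB cap.
unlimited_price = 50.00                # assumption, not a cited figure
capped_price = 0.80 * unlimited_price  # 20% discount
print(f"Capped plan: ${capped_price:.2f} for 25 GB "
      f"(${price_per_gb(capped_price, 25):.2f}/GB) vs. "
      f"${unlimited_price:.2f} for unlimited")
```

In this illustration the low-data mobile plan costs three times as much per GB as the larger plan, and the hypothetical capped fixed plan saves only $10 a month in exchange for giving up unlimited usage, consistent with the observation above that a small discount may not compensate for a significant usage restriction.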
As mentioned earlier, the Telecommunications Act of 1996 calls on FCC to promote the public interest in the provision of telecommunications services. Because FCC is not conducting any broader analysis with these data, it may not have a full understanding of how UBP is being used and its effects on consumers. This lack of understanding may limit FCC's ability to act to protect the public interest if necessary.

Two industry stakeholders we interviewed also suggested that fixed providers—many of which also provide television video content—could use UBP as a means to raise the price of watching online streaming video services, a competitor to their video services, as households continue to substitute streaming video for television. Because UBP can make it more expensive to watch data-heavy content such as streaming video, it may discourage people from accessing such content and, therefore, discourage them from eliminating their television service. This might adversely affect firms that provide online video streaming services and reduce competition and innovation in the market for streaming video content, thereby negatively affecting consumers. In addition, two industry stakeholders we interviewed believe UBP could in general inhibit innovation that results from experimentation and unlimited access to the Internet. Greater innovation could result in the development of more content and applications that consumers demand and value. Some Internet users, such as heavy data users, may pay more for access under UBP. As a result, some of them may limit their Internet use—as mentioned earlier, some focus group participants said that they have reduced their mobile data usage as a result of UBP—particularly their use of data-heavy content and applications such as online learning and video. This could lead to reduced use of some beneficial Internet applications and reduced innovation in such applications. For example, one public interest group said that the limits that UBP may impose on the market for innovative applications and content may limit the potential of new startups. One of the mobile providers we reviewed has entered into agreements with selected content providers so that those content providers' data do not count toward customers' data allowances. When presented with such a hypothetical situation, some focus group participants expressed confusion over how such an arrangement might work, but we found participants in all eight groups who agreed that they would be more likely to access content that does not count toward their data limits than content that does. Two industry stakeholders we interviewed suggested that such agreements would favor large established companies and reduce innovation or competition. Furthermore, UBP could have negative effects on network security. According to a 2012 study, UBP may result in consumers—in an attempt to reduce data usage—forgoing automatic security updates to their computers, which could have negative implications for network security. Consumers who are subject to UBP need to understand their data usage and needs in order to make an informed decision about what is best for them.
The confusion over data usage exhibited by consumers in our focus groups, inconsistencies in estimates of the data that certain applications use, and the fact that "hidden" data usage can result in consumers using more data than expected can make it difficult for consumers to make informed decisions regarding their data plans and could result in consumers exceeding their data allowances and facing overage charges. Mobile customers benefit from a voluntary code of conduct that FCC helped facilitate to guide provider actions and consumer education. Such a code helps enable more consistent and transparent information for consumers. Because no code of conduct exists for fixed providers, there is less assurance that information provided to consumers is clear, consistent, and transparent, potentially leading to consumer confusion over data usage and poor decisions regarding data plans.

Although few fixed Internet customers are affected by UBP at this time, the number could grow to the extent that fixed Internet providers increase their use of UBP and data use grows. Providers could implement UBP in a way that benefits consumers—for example, by offering low-data, low-cost plans for customers who do not want to pay for an unlimited data plan they do not need. However, providers—especially those facing limited competition—could use UBP as a means to increase their profits, which could result in UBP having negative effects, including increased prices paid by consumers, reductions in content and applications accessed by consumers, and increased threats to network security. Because fixed Internet customers generally have limited choice of providers, they may be unable to switch providers to select a data plan that best meets their needs. While focus group participants have adapted to mobile data plans, they and other consumers may have a harder time adapting to fixed Internet UBP given the limited choice among providers and the challenge of reducing data usage at home if needed, as consumers have done for mobile Internet. While FCC has been collecting relevant data on the use of UBP, including information on data allowances, FCC is not using the data to gain an understanding of how UBP is being used and what its potential effects are on consumers. Without this market knowledge, FCC would not necessarily be able to take appropriate action if UBP is being used in a way that is harmful to consumers.

To ensure that the application of UBP for fixed Internet access does not conflict with the public interest, we recommend that FCC:

1. Collaborate with fixed Internet providers to develop a voluntary code of conduct, similar to the Wireless Code of Conduct, to improve communication and understanding of data use and pricing by Internet consumers.

2. Make use of existing data collection sources to track fixed Internet UBP implementation and its effects on consumers nationwide so that FCC can take actions, if necessary, to protect consumer interests.

We provided a draft of this report to FCC for review and comment. FCC provided a written response (see app. II) as well as technical comments that we incorporated as appropriate.
In response to our recommendation that FCC collaborate with fixed Internet providers to develop a voluntary code of conduct for consumer communication, FCC said that because the number of consumer complaints regarding UBP by fixed providers appears to be small and UBP plans are less common for fixed Internet customers than for mobile customers, it is unclear whether any action is needed at this time. FCC added that it will continue to monitor its complaints and provider offerings for trends that might indicate that more action is needed. We recognize that UBP plans are less common for fixed Internet customers than for mobile customers, but we believe additional action is warranted, as we recommended. Given the trend toward greater use of UBP by fixed providers, increased data usage, confusion by consumers regarding data usage, our previous findings that consumers may not know to file complaints with FCC, and the potential that limited competition among fixed providers could result in their using UBP in ways that harm consumers, we continue to believe that it is important for FCC to be more proactive. For mobile Internet, FCC worked with providers to improve consumer education regarding data through the Consumer Code for Wireless Service only after many consumers faced significant problems with their data usage. We believe FCC has an opportunity to protect consumers before significant problems occur by collaborating with fixed Internet providers now to develop a code of conduct.

Second, in response to our recommendation that FCC use existing data sources to track UBP implementation, FCC agreed to do so but noted that its two data-gathering efforts that collect some data relevant to UBP—the Urban Rates survey and Form 481—were both designed for other purposes. FCC added that, as a result, while these efforts would allow it to conduct some analyses regarding UBP—such as of the plans offered by providers and the terms of those plans—it would not be able to conduct certain analyses, such as of the number of subscribers subject to UBP. We agree that FCC should analyze the existing data as best it can, taking into account such limitations. However, we also believe that in conducting such analyses, FCC, to the extent possible, should consider using other existing data as well—such as data on median data consumption—to make the analyses as meaningful as possible.

We are sending copies of this report to interested congressional committees and the Chairman of the FCC. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2834 or goldsteinm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made significant contributions to this report are listed in appendix III.

To review what information is available about the application of usage-based pricing by Internet service providers, we reviewed information on the current consumer Internet data plans of the largest 13 fixed and 4 mobile providers in order to cover 98 percent of each market. To determine market shares among fixed providers, we used data from the first quarter of 2014 on subscribership levels reported by Leichtman Research Group, a private research firm specializing in the telecommunications industry.
To determine mobile provider market shares, we used data on 2012 subscribership numbers by provider as reported by the Federal Communications Commission (FCC) in its 16th Mobile Competition Report. We collected plan information from each provider's public website, validated it with each provider during an interview, and then confirmed the information with each provider again in October 2014. Information we collected included plan terms and conditions relevant to UBP as applicable, such as data allowances, connection speeds, and overage charges or other consequences for customers exceeding their allowances. Information on Internet plans, including data allowances, was valid as of October 2014 but could change at any time. We also interviewed these providers about the extent to which they use UBP and the specifics of their data plans. Finally, we interviewed FCC regarding these issues, as well as its role with respect to UBP.

To review issues related to usage-based pricing that selected consumers report are important to them, we contracted with a private market research firm to assist with screening, recruiting, and holding focus groups with Internet consumers. We held two focus groups in each of the following cities: Baltimore, MD; Des Moines, IA; Las Vegas, NV; and New York, NY. The cities were selected to ensure diversity in geographic location and population of metropolitan areas. In each city, we held one group with self-identified "light" users and one group with self-identified "moderate" and "heavy" users, based on definitions from FCC. Potential participants were screened to ensure that they had both mobile Internet data plans and in-home broadband service. We also recruited participants in order to ensure a mix of age, race, sex, education level, and income level. Each of the eight groups contained 9 to 10 participants, for a total of 77. Focus group discussions were structured and guided by a moderator who used a standardized list of questions to encourage participants to share their thoughts and experiences with UBP. Specifically, question topics included participants' use of fixed and mobile Internet, their experiences with fixed and mobile UBP, and their opinions of fixed and mobile UBP. To ensure participants understood the terminology used in the group discussions, we began each session by presenting copies of figures 1 and 2 in this report and clarifying the differences between fixed and mobile Internet to the participants. During the sessions, we assured participants of the anonymity of their responses, promising that their names would not be used. We also conducted one pretest focus group at GAO and revised the moderator's guide prior to beginning our focus group sessions. Each of the eight focus groups was transcribed, and the transcript served as the record for each group. Those transcripts were then evaluated using content analysis to develop our findings. Our analysis focused on categorizing the common themes and statements made across the focus groups and quantifying those categories to gain an understanding of the predominant viewpoints expressed by the participants. The analysis was conducted in two steps. In the first step, two analysts independently developed a codebook and then worked together to resolve any discrepancies. In the second step, an analyst coded each transcript and a second analyst verified those codes. Any coding discrepancies were resolved by both analysts agreeing on what the codes should be.
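As a rough illustration of the quantification step described above, once transcripts are coded a tally like the following could be produced; the theme codes and coded data shown are hypothetical stand-ins, not our actual codebook or transcripts:

```python
# Hypothetical sketch of the content-analysis quantification step: tally coded
# themes across focus group transcripts. Codes and data are illustrative
# assumptions, not the actual codebook or transcripts used for this report.
from collections import Counter

# Each transcript is represented by the list of theme codes applied to it.
coded_transcripts = {
    "Baltimore group 1": ["price_concern", "wifi_offload", "price_concern"],
    "Des Moines group 2": ["tracking_difficulty", "price_concern"],
    "New York group 1": ["wifi_offload", "limited_provider_choice"],
}

mention_counts = Counter(
    code for codes in coded_transcripts.values() for code in codes
)
groups_per_theme = Counter(
    code for codes in coded_transcripts.values() for code in set(codes)
)
for theme, n_groups in groups_per_theme.most_common():
    print(f"{theme}: raised in {n_groups} group(s), "
          f"{mention_counts[theme]} total mention(s)")
```

Counting both how many groups raised a theme and how often it was mentioned supports statements of the form used in this report, such as "in six groups, at least one participant said" a given thing.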
Methodologically, focus groups are not designed to (1) demonstrate the extent of a problem or to generalize results to a larger population, (2) develop a consensus to arrive at an agreed-upon plan or make decisions about what actions to take, or (3) provide statistically representative samples or reliable quantitative estimates. Instead, focus groups are intended to generate in-depth information about the reasons for the focus group participants' attitudes on specific topics and to offer insights into their concerns about and support for an issue. The projectability of the information produced by our focus groups is limited for several reasons. First, the information includes only the discussions from the Internet consumers in the eight groups. Second, while the composition of the groups was designed to ensure a range of age and education levels, the groups were not randomly sampled. Third, participants were asked questions about their experiences or opinions, and other Internet consumers not in the focus groups may have had other experiences or opinions. To review the potential effects of UBP on consumers, we completed a literature search to obtain and review documentation, research papers and studies, and articles related to the use of UBP and its potential effects. Our search covered relevant work, including economic literature, articles in scholarly journals, and industry publications, published in the past 5 years, and used search terms including "data cap," "throttling," and "usage-based pricing." We reviewed these papers and their methodologies to determine their reliability. We interviewed the 4 mobile and 13 fixed providers mentioned above regarding the potential effects of UBP on consumers. We also interviewed FCC as well as industry stakeholders, including academics and researchers, public interest organizations, industry associations, and others. We selected the industry stakeholders to interview based on our review of published literature and studies, as well as on recommendations from providers and other organizations we interviewed. We conducted this performance audit from November 2013 to November 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Mark L. Goldstein, (202) 512-2834 or goldsteinm@gao.gov. In addition to the contact above, Keith Cunningham (Assistant Director); Eli Albagli; Namita Bhatia-Sabharwal; Melissa Bodeau; Matthew Cook; Joshua Ormond; Cheryl Peterson; Matthew Rosenberg; Hai Tran; and Elizabeth Wood made key contributions to this report.
Access to broadband Internet is seen as crucial to improving access to information, quality of life, and economic growth. In recent years, some Internet providers have moved away from unlimited data plans to UBP, with uncertain effects on consumers. GAO was asked to review the use of UBP by Internet providers. This report examines (1) information available about the application of UBP by Internet service providers, (2) issues related to UBP that selected consumers report are important to them, and (3) the potential effects of UBP on consumers. GAO collected data on Internet plans from the country's 13 top fixed and 4 top mobile providers; contracted with a market research firm to assist with conducting eight focus groups held with consumers in four cities selected to reflect geographic diversity; reviewed relevant studies; and interviewed officials from the top Internet providers, FCC, and industry stakeholders, including researchers, policy organizations, and industry organizations. Based on an analysis of consumer data plans of the top 13 fixed—in-home—and 4 mobile Internet providers, GAO found that mobile providers employ usage-based pricing (UBP) more commonly than fixed providers do. Under UBP, providers can charge varying prices, change connection speeds, or take other actions based on the Internet data consumed. The 4 largest mobile providers in the country all use UBP to some extent, as do 7 of the 13 largest fixed providers. Because prices can vary based on usage, it may be important that consumers be informed about data. GAO found that some tools offered by fixed providers to educate consumers regarding data can be confusing. For example, some providers' estimates of the data consumed by the same type of content vary. While mobile providers follow a voluntary code of conduct, developed with the Federal Communications Commission (FCC), to encourage useful, consistent consumer education, no similar code exists among fixed providers, potentially resulting in confusion and a lack of consumer awareness regarding data needs. Participants in all eight of GAO's focus groups reported being subject to mobile UBP and expressed some concerns about it, such as difficulty tracking data usage among many devices. Yet participants accepted mobile UBP and adapted by, for example, limiting use of high-data content and by connecting to Wi-Fi. By contrast, only a few participants in three focus groups reported being subject to fixed Internet UBP. Participants expressed concerns about possible increases in prices for access caused by fixed Internet UBP and the potential effect of limits on their fixed Internet, where they have not considered data usage. Participants exhibited confusion over data consumption—for example, thinking that low-data activities like online shopping consumed large amounts of data. Participants also expressed concern about the difficulty of tracking the wide range of devices accessing their fixed data allowance and that fixed UBP may negatively affect students, people working from home, and those with lower socio-economic status. The potential effects of UBP are uncertain and could depend on competition among providers. Based on economics literature, UBP can address the usage of the heaviest data users and can benefit consumers by providing more options as opposed to a one-size-fits-all unlimited data plan. The literature also suggests that providers could implement UBP to benefit consumers—for example, by offering low-data, low-cost plans for those who do not want an unlimited data plan.
While the mobile providers GAO reviewed offer such plans, fixed providers—generally facing less competition—do so to a lesser extent. According to the literature, providers facing limited competition could use UBP to increase profits, potentially resulting in negative effects, including increased prices, reductions in content accessed, and increased threats to network security. Several researchers and stakeholders GAO interviewed said that UBP could reduce innovation in applications and content if consumers ration their data. While FCC is collecting data regarding fixed UBP, it is not using these data to track UBP use because it only recently started collecting the data, and it did so for the specific purpose of analyzing prices. As a result, although FCC is charged with promoting the public interest, it may not know whether UBP is being used in a way that is contrary to the public interest and, if so, be able to take appropriate actions.
At the end of fiscal year 2008, the number of civilian and military personnel in DOD's acquisition workforce totaled nearly 126,000, of which civilian personnel comprised 88 percent. DOD defines its acquisition workforce to include 13 career fields, based on the Defense Acquisition Workforce Improvement Act of 1990. From fiscal years 2001 to 2008, the number of civilian and military acquisition personnel in these 13 fields declined overall by 2.6 percent; however, some career fields, such as test and evaluation, have increased substantially, while others, such as business, cost estimating, and financial management, have shown dramatic declines. See appendix I for the number of military and civilian personnel in each of the acquisition career fields in fiscal years 2001 and 2008, and the percentage change between those years.

Our prior work has shown that DOD has relied heavily on contractor personnel to augment its in-house workforce. In March 2008, we reported that in 15 of the 21 offices we reviewed, contractor personnel outnumbered DOD personnel and comprised as much as 88 percent of the workforce. In the other 6 offices, contractor personnel comprised between 19 and 46 percent of the workforce. Although this review did not focus on the acquisition workforce, many of the 21 offices had acquisition responsibilities. While the use of contractors provides the government certain benefits, such as increased flexibility in fulfilling immediate needs, we and others have raised concerns about the federal government's services contracting, in particular for professional and management support services, including acquisition support services. In March 2008, we noted concern about one DOD component's hiring of contractor personnel in reaction to a shortfall in the government workforce rather than as a planned strategy to help achieve its mission. In our case study, we found that one Army component was paying between 17 and 27 percent more on average for contractor personnel working as contract specialists than for its government employees who were doing equivalent work. In addition to the risk of paying more than necessary for the work that it needs, DOD runs the risk of losing government control over and accountability for mission-related policy and program decisions when contractors provide services that closely support inherently governmental functions, which require discretion in applying government authority or value judgments in making decisions for the government. The closer contractor services come to supporting inherently governmental functions, the greater the risk of their influencing the government's control over and accountability for decisions that may be based, in part, on contractor work. Other concerns about using contractor personnel include the improper use of personal services contracts and the increased potential for conflicts of interest, both organizational and personal.

Numerous components in DOD share policy and guidance responsibility for the workforce. Among the components, the Office of the Under Secretary for Acquisition, Technology and Logistics (AT&L) is responsible for managing DOD's acquisition workforce, including tailoring policies and guidance specific to the acquisition workforce and managing the training and certification of that workforce. In addition, each military service has its own corresponding acquisition offices that develop additional service-specific guidance and provide management and oversight of its workforce.
Within each service, the program offices identify acquisition workforce needs, make decisions regarding the composition of the workforce (the mix of civilian, military, and contractor personnel), and provide the day-to-day management of the workforce. DOD lacks critical departmentwide information on the use and skill sets of contractor personnel performing acquisition-related functions. While DOD planning documents state that the workforce should be managed from a "total force" perspective—which calls for contractor personnel to be managed along with civilian and military personnel—DOD has only recently collected departmentwide data on contractor personnel performing acquisition-related functions. According to an AT&L official, DOD's baseline count shows that 52,000 contractor personnel are supporting the acquisition workforce. As such, contractor personnel comprise about 29 percent of DOD's total acquisition workforce. The AT&L official noted that the contractor personnel tracking system is still under development. Data we obtained from 66 program offices show that contractor personnel comprised more than a third of those programs' acquisition-related positions. Table 1 shows the data on contractor personnel reported by the 66 program offices (see appendix II for more detailed information).

DOD also lacks information on the factors driving program offices' decisions to use contractor personnel rather than hire in-house personnel. DOD guidance for determining the workforce mix outlines the basis on which officials should make decisions regarding what type of personnel—military, civilian, or contractor—should fill a given position. The guidance's primary emphasis is on whether the work is considered to be an inherently governmental function, not on whether it is a function that is needed to ensure institutional capacity. The guidance also states that using the least costly alternative should be an important factor when determining the workforce mix. However, when we asked program offices about their reasons for using contractor rather than civilian personnel, we found that cost was cited by only 1 program office. The 30 program offices that provided reasons for using contractor personnel cited the following key factors:

- 22 cited a shortage of civilian personnel with a particular expertise,
- 18 cited staffing limits on civilian personnel,
- 17 cited that the particular expertise sought is generally not hired by the government,
- 15 cited the ease or speed of bringing on contractor personnel,
- 9 cited having a short-term requirement,
- 8 cited funding not being available for civilian personnel, and
- 1 cited the cost of contractor personnel being less than that of civilian personnel.

(See appendix III for information on the number of program offices reporting the reasons for using contractor personnel by component.) In comparison with DOD's practices, we found that leading organizations maintain and analyze data on their contractor personnel in order to mitigate risks, ensure compliance with in-house regulations and security requirements, and ensure that reliance on contractor personnel creates value for the company. We also found that leading organizations take a business-oriented approach to determining when to use contractor support. For example, some companies generally use contractor personnel to facilitate flexibility and meet peak work demands without hiring additional, permanent, full-time employees.
Some also place limits on their use of contractor personnel, such as limiting the use of contractor personnel to temporary support, to 1 year of operations, or to functions that are not considered core pieces of the company's main business. AT&L lacks key pieces of information, and this hinders its ability to determine gaps in the number and skill sets of acquisition personnel needed to meet DOD's current and future missions. At a fundamental level, workforce gaps are determined by comparing the number and skill sets of the personnel that an organization has with what it needs. However, AT&L lacks information on both what it has and what it needs. Not having this information in its assessments not only skews analyses of workforce gaps, but also limits DOD's ability to make informed workforce allocation decisions.

With regard to information on the personnel it has, AT&L lacks complete information on the skill sets of the current acquisition workforce—including contractor personnel—and whether these skill sets are sufficient to accomplish its missions. AT&L is currently conducting a competency assessment to identify the skill sets of its current in-house acquisition workforce. While this assessment will provide useful information regarding the skill sets of the current in-house acquisition workforce, it is not designed to determine the size, composition, and skill sets of an acquisition workforce needed to meet the department's missions. AT&L also lacks complete information on the acquisition workforce needed to meet DOD's mission. The personnel numbers that AT&L uses to reflect needs are derived from the budget. Because these personnel numbers are constrained by the size of the budget, they likely do not reflect the full needs of acquisition programs. Of the 66 program offices that provided data to us, 13 reported that their authorized personnel levels were lower than those they requested. In comparison with DOD's practices, we found that leading organizations identify gaps in the workforce by assessing the competencies of their workforces and comparing those with the overall competencies the organization needs to achieve its objectives. An official from one company noted that such an assessment indicated that the company needed skill sets different from those it needed in the past, because the work in one of its lines of service had increased.

AT&L has begun to respond to recent legislative requirements aimed at improving DOD's management and oversight of its acquisition workforce, including developing data, tools, and processes to more fully assess and monitor that workforce. Some of AT&L's recent initiatives include the following:

- Drafting an addendum to the Implementation Report for the DOD Civilian Human Capital Strategic Plan 2006-2010 that will lay out AT&L's vision and key initiatives for managing and overseeing the acquisition workforce and an analysis of the status of the acquisition workforce.
- Implementing the Acquisition Workforce Development Fund, with efforts focused in three key areas: (1) recruiting and hiring, (2) training and development, and (3) retention and recognition. The largest proportion of the fund is currently slated for recruiting and hiring.
- Developing a competency assessment for the acquisition workforce, which is scheduled to be completed in March 2010.
- Establishing the Defense Acquisition Workforce Joint Assessment Team.
According to an AT&L official, the team will now focus its efforts on identifying, tracking, and reporting information on contractor personnel supporting the acquisition workforce—including developing a common definition to be used across the department. The Secretary of Defense recently announced that efforts will begin in fiscal year 2010 to increase the size of the acquisition workforce by converting 11,000 contractor personnel to government positions and hiring an additional 9,000 government personnel by 2015. According to an AT&L official, AT&L is working with the components to develop the plans for these efforts. In addition to these acquisition workforce initiatives, another DOD initiative aimed at improving the broader workforce may have the potential to enhance AT&L's efforts to obtain information on the skill sets of contractor personnel supporting the acquisition workforce. Specifically, DOD, through its components, is developing an annual inventory of contracts for services. The inventory is required to include, among other things, information identifying the missions and functions performed by contractors, the number of full-time contractor personnel equivalents, and the funding source for the contracted work. The Army issued its first inventory for fiscal year 2007. This initial inventory, however, does not include information on the skill sets of contractor personnel and the functions they perform. Inventories for all DOD components are not scheduled to be completed before June 2011. Although these efforts are promising, it is too early to determine the extent to which they will improve the department's management and oversight. Moreover, these efforts may not provide the comprehensive information DOD needs to manage and oversee its acquisition workforce.

DOD faces significant challenges in assessing and overseeing its acquisition workforce to ensure that it has the capacity to acquire needed goods and services and monitor the work of contractors. While DOD's recent and planned actions could help address many of these challenges, the department has yet to determine the acquisition workforce that it needs or to develop comprehensive information about contractor personnel—including their skill sets, the functions they perform, and the length of time for which they are used. In addition, without guidance on the appropriate circumstances under which contractor personnel may perform acquisition work, DOD runs the risk of not maintaining sufficient institutional capacity to perform its missions, paying more than necessary for the work that it needs, or losing control over and accountability for mission-related policy and program decisions. Until DOD maintains more comprehensive information on its contractor personnel, it will continue to have insufficient information regarding the range of skills and the functions performed by this key component of the acquisition workforce. Without this information on a departmentwide basis, DOD runs the risk of not having the right number and appropriate mix of civilian, military, and contractor personnel it needs to accomplish its missions. In our report released March 25, 2009, we made several recommendations to the Secretary of Defense to better ensure that DOD's acquisition workforce is the right size with the right skills and that the department is making the best use of its resources.
We recommended that the Secretary:

- Collect and track data on contractor personnel who supplement the acquisition workforce—including the functions they perform, their skill sets, and their length of service—and conduct analyses using these data to inform acquisition workforce decisions.
- Identify and update on an ongoing basis the number and skill sets of the total acquisition workforce that the department needs to fulfill its mission.
- Review and revise the criteria and guidance for using contractor personnel to clarify under what circumstances and the extent to which it is appropriate to use contractor personnel to perform acquisition-related functions.
- Develop a tracking mechanism to collect information on the reasons contractor personnel are being used so that DOD can determine whether the guidance has been appropriately implemented across the department.

We are pleased that DOD has implemented part of the first recommendation by collecting departmentwide data on the number of contractor personnel that support the acquisition workforce. We are encouraged that DOD generally concurred with the rest of our recommendations, although the department noted that collecting information on the skill sets and length of service of contractor personnel needed to be carefully considered. We agree that the manner in which data on contractor personnel are to be collected should continue to be carefully considered. Nevertheless, we continue to believe that comprehensive data on contractor personnel are needed to accurately identify the department's acquisition workforce gaps and inform its decisions on the appropriate mix of in-house and contractor personnel. As DOD moves forward with its recently announced plans to increase the size of the acquisition workforce over the next few years, having comprehensive information about the acquisition workforce it both has and needs will become even more vital to ensuring that the department makes the most effective workforce decisions.

Mr. Chairman, this concludes my prepared remarks. I would be happy to answer any questions you or other Members of the Subcommittee may have at this time.

Appendix II: Military, Civilian, and Contractor Personnel in Acquisition-Related Functions by Service as Reported by Selected Program Offices in 2008. [Table not reproduced here; recoverable entries include the Air Force (19 program offices) and joint services (9 program offices).] Career fields are grouped as Business (includes auditing, business, cost estimating, financial management, property management, and purchasing) and Engineering and Technical (includes systems planning, research, development and engineering; lifecycle logistics; test and evaluation; production, quality and manufacturing; and facilities engineering). FFRDC personnel work in Federally Funded Research and Development Centers.

Appendix III: Number of Program Offices Reporting Reason for Using Contractor Personnel as Reported by Selected Program Offices in 2008. [Table not reproduced here; recoverable entries include the Navy and Marine Corps (5 program offices) and joint services (4 program offices).]

John K. Needham, (202) 512-5274 or needhamjk1@gao.gov. In addition to the contact named above, Carol Dawn Petersen, Assistant Director, and Ruth "Eli" DeVan, Analyst-in-Charge, made key contributions to this report.

Department of Defense: Additional Actions and Data Are Needed to Effectively Manage and Oversee DOD's Acquisition Workforce. GAO-09-342. Washington, D.C.: March 25, 2009.
Human Capital: Opportunities Exist to Build on Recent Progress to Strengthen DOD's Civilian Human Capital Strategic Plan. GAO-09-235. Washington, D.C.: February 10, 2009.
High Risk Series: An Update. GAO-09-271. Washington, D.C.: January 2009.
Department of Homeland Security: A Strategic Approach Is Needed to Better Ensure the Acquisition Workforce Can Meet Mission Needs. GAO-09-30. Washington, D.C.: November 19, 2008.
Human Capital: Transforming Federal Recruiting and Hiring Efforts. GAO-08-762T. Washington, D.C.: May 8, 2008.
Defense Contracting: Army Case Study Delineates Concerns with Use of Contractors as Contract Specialists. GAO-08-360. Washington, D.C.: March 26, 2008.
Defense Management: DOD Needs to Reexamine Its Extensive Reliance on Contractors and Continue to Improve Management and Oversight. GAO-08-572T. Washington, D.C.: March 11, 2008.
Defense Contracting: Additional Personal Conflict of Interest Safeguards Needed for Certain DOD Contractor Employees. GAO-08-169. Washington, D.C.: March 7, 2008.
Federal Acquisition: Oversight Plan Needed to Help Implement Acquisition Advisory Panel's Recommendations. GAO-08-515T. Washington, D.C.: February 27, 2008.
The Department of Defense's Civilian Human Capital Strategic Plan Does Not Meet Most Statutory Requirements. GAO-08-439R. Washington, D.C.: February 6, 2008.
Defense Acquisitions: DOD's Increased Reliance on Service Contractors Exacerbates Long-standing Challenges. GAO-08-621T. Washington, D.C.: January 23, 2008.
Department of Homeland Security: Improved Assessment and Oversight Needed to Manage Risk of Contracting for Selected Services. GAO-07-990. Washington, D.C.: September 17, 2007.
Federal Acquisitions and Contracting: Systemic Challenges Need Attention. GAO-07-1098T. Washington, D.C.: July 17, 2007.
Defense Acquisitions: Improved Management and Oversight Needed to Better Control DOD's Acquisition of Services. GAO-07-832T. Washington, D.C.: May 10, 2007.
Highlights of a GAO Forum: Federal Acquisition Challenges and Opportunities in the 21st Century. GAO-07-45SP. Washington, D.C.: October 2006.
Framework for Assessing the Acquisition Function at Federal Agencies. GAO-05-218G. Washington, D.C.: September 2005.
A Model of Strategic Human Capital Management. GAO-02-373SP. Washington, D.C.: March 15, 2002.

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Since 2001, the Department of Defense's (DOD) spending on goods and services has more than doubled, to $388 billion in 2008, while the number of civilian and military acquisition personnel has remained relatively stable. To supplement its in-house workforce, DOD relies heavily on contractor personnel. If it does not maintain an adequate workforce, DOD places its billion-dollar acquisitions at increased risk of poor outcomes and vulnerability to fraud, waste, and abuse. This testimony is based on GAO's March 2009 report and addresses DOD's efforts to assess the sufficiency of its total acquisition workforce and to improve its management and oversight of that workforce. It also discusses selected practices of leading organizations that may provide DOD with insights for its efforts. Although contractor personnel are a key segment of its total acquisition workforce, DOD lacks critical departmentwide information on the use and skill sets of these personnel. DOD also lacks information on why contractor personnel are used, which limits its ability to determine whether decisions to use contractors to supplement the in-house acquisition workforce are appropriate. GAO found that program office decisions to use contractor personnel are often driven by factors such as quicker hiring time frames and civilian staffing limits, rather than by the nature or criticality of the work. In comparison with DOD's practices, leading organizations maintain and analyze data on their contractor personnel and take a business-oriented approach to determining when to use contractor support. DOD also lacks key pieces of information, which limits its ability to determine gaps in the acquisition workforce it needs to meet its missions. In addition to lacking information on contractor personnel, DOD lacks complete information on the skill sets of its in-house personnel. DOD also lacks information on the acquisition workforce it needs to meet its mission. Not having this information not only skews analyses of workforce gaps, but also limits DOD's ability to make informed workforce allocation decisions and determine whether the total acquisition workforce--in-house and contractor personnel--is sufficient to accomplish its mission. In comparison with DOD's practices, leading organizations identify gaps in the workforce by assessing the competencies of their workforces and comparing those with the overall competencies the organization needs to achieve its objectives. DOD recently initiated several efforts aimed at improving the management and oversight of its acquisition workforce, such as plans for overseeing additional hiring, recruiting, and retention activities. DOD is also planning to increase its in-house acquisition workforce by converting 11,000 contractor personnel to government positions and hiring an additional 9,000 government personnel by 2015. The success of DOD's efforts to improve the management and oversight of its acquisition workforce, however, may be limited without comprehensive information on the acquisition workforce it has and needs.
In June 1995, NASA and DOD agreed to identify cooperative actions that could lead to significant reductions in investments and cost of operations. The agencies identified seven areas of mutual interest, one of which was major aerospace test facilities—specifically, wind tunnels, aeropropulsion test cells, rocket engine test stands, space environmental simulation chambers, arc-heaters, and hypervelocity gas guns and ballistic ranges. The cooperation initiative was done under the auspices of the joint NASA/DOD Aeronautics and Astronautics Coordinating Board (AACB). Figure 1 shows the location of these test facilities. The number of active major test facilities declined from 260 in 1993 to 186 in 1996. The AACB's major test facilities study team concluded that, in most areas, the present number of major test facilities "very nearly represents the minimum required to conduct the aeronautical- and space-related research and development programs identified for this country." The study team further stated that (1) closing facilities without eliminating programs does not generate big savings, (2) NASA and DOD are not on a common track to developing comparable facility-cost accounting, (3) there is inadequate coordination of investments, upgrades, and operations between NASA and DOD, and (4) NASA and DOD's rocket propulsion test facilities have excess capacity for current and future workload. To address these issues, the team recommended in April 1996 that NASA and DOD form six cooperative alliances to coordinate investment to avoid unnecessary duplication, coordinate test schedules to spread the workload across facilities, and develop standardized and common business processes. Notwithstanding a history of NASA/DOD cooperation on aerospace test facility-related issues prior to 1996, these goals collectively represent an effort to develop a broader national perspective on such issues. In September 1996, Congress added to this effort by requiring NASA and DOD to prepare a joint plan on rocket propulsion test facilities. The institutional centerpiece of future NASA/DOD cooperation on aerospace test facilities is six alliances approved by the AACB in April 1996. Twenty months later, NASA and DOD signed agreements formally establishing these alliances. However, with one exception, the new alliances did not meet regularly during that time, and the rocket propulsion alliance—which predates the cooperation initiative—met only once. The one exception was the space environmental simulation alliance, which met four times and evaluated a proposed new investment at Kennedy Space Center. The rationale given by most alliances for not meeting was the lack of an approved charter. Despite not having official charters, the space environmental simulation alliance met four times and the rocket propulsion test alliance met once between May 1996 and October 1997. The other alliances could have conducted business without formal charters, but did not. At its inaugural meeting in November 1996, the space environmental simulation alliance noted the absence of a charter, but agreed to conduct business deemed to be in the "best national interest." The alliance also met in February, May, and August 1997. Similarly, the rocket propulsion test alliance met in October 1996 and members noted other alliances "do not appear to be meeting," but agreed the rocket propulsion alliance "cannot wait." As of November 30, 1997, this alliance has not met again.
A NASA official told us the alliance did not meet because there was little business to discuss until NASA implemented its plan, as discussed below, to consolidate NASA's management of rocket propulsion testing. In addition, NASA and DOD officials disagreed over who in their respective agencies should sign the alliance's charter. An example of how the promise of closer cooperation on test facility-related issues can be met by alliances was provided by the space environmental simulation alliance in March 1997. In early 1997, officials at NASA's Kennedy Space Center proposed to build a vacuum chamber to (1) test for leaks in the pressurized parts of the International Space Station prior to their launch and assembly in space and (2) support an environmental test capability at Kennedy. In February 1997, NASA headquarters officials asked the space environmental simulation alliance to evaluate the proposal. In March 1997, the alliance's evaluation team concluded that there was "no compelling reason" to construct such a facility to support space station requirements. With regard to Kennedy's proposed test capability, the team recommended a "rigorous" thermal vacuum chamber requirements and cost-benefits analysis that, in part, would include determination of the national thermal vacuum chamber capabilities. On June 25, 1997, the Kennedy Space Center introduced another approach to justify acquiring a vacuum chamber. This time, Kennedy officials solicited comments from industry, for planning purposes only, on the design, construction, and procurement methodology for a thermal vacuum chamber to simulate environments on other planets. Kennedy officials estimated the chamber would cost from $35 million to $60 million. NASA's Office of the Inspector General is currently conducting a review to determine whether (1) the alliance's recommended cost-benefit analysis was performed, (2) the vacuum chamber is needed to support present and future NASA missions and programs, and (3) funding will be available for the project's construction, installation, and operation. The Inspector General has not set a completion date for this review. Despite the formation of the rocket propulsion alliance, NASA's and DOD's relationship over this type of testing has recently been marked by competition. Partly to improve its competitive position, NASA has consolidated rocket propulsion test management in one center, but is struggling to define the center's authority for this role. Testing engines in the next phase of the EELV program was the focus of NASA and Air Force competition. In July 1997, an EELV engine contractor provisionally selected NASA's Stennis Space Center to test engines in the next phase of the program. Consequently, the future role of the Air Force's test center for this program is uncertain. According to NASA, its past approach to managing rocket propulsion test facilities "resulted in facility duplication and higher overall infrastructure-related costs.
Substantial investments have been made in facilities based on local insight and local funding provided by programs, institutions, and non-NASA customers rather than on an Agency-wide perspective." In May 1996, NASA's Associate Administrator for Space Flight unilaterally designated Stennis Space Center the center of excellence "not only for NASA, but DOD, other government agencies, academia and industry." He noted that the "unique capabilities currently in place" at Stennis "permit us to centralize the major propulsion test facilities of NASA, DOD, and industry." NASA's rocket propulsion testing is managed by the Rocket Propulsion Test Management Board. It determines the location of each test, reviews investment recommendations, and establishes annual budget requirements. For example, in November 1996, the Board accepted a recommendation to relocate a 5,000-gallon high-pressure liquid hydrogen tank from a component test stand at Marshall to the one at Stennis, as part of NASA's plan to complete this facility and consolidate test capabilities at Stennis. The Board has also decided to move four other liquid oxygen tanks from Marshall. Although NASA has consolidated management of rocket propulsion testing at Stennis, it has struggled to define Stennis' authority to make investment decisions. For example, the early goals of consolidation went beyond relocation to include mothballing and abandoning test assets as necessary to reduce or eliminate unnecessary duplication and lower costs. In January 1997, Stennis officials proposed a plan that would have greatly reduced testing at Marshall and Plum Brook; some stands would have been abandoned and others would have had their capabilities reduced and transferred to Stennis and White Sands. The draft plan was based on known requirements for NASA's test services. But, by June 1997, NASA's management decided to abandon Stennis' plan rather than the test stands at other centers. Nearly all of the test stands and facilities that would have been deactivated by the January plan will remain open. According to Stennis officials, the June plan is based on possible future customers, which are estimated to be more plentiful than funded customers. The Air Force tests rocket engines at Phillips Laboratory, Edwards Air Force Base, California; and Arnold Engineering Development Center, Tennessee. In April 1997, the Air Force established the Air Force Research Laboratory, consisting of Phillips Laboratory, three other laboratories, and the Air Force Office of Scientific Research. However, Arnold, as a test center, is not part of the consolidation. Phillips Laboratory's Test Stand 1A was built in the late 1950s and has recently been altered to give it a liquid oxygen/liquid hydrogen capability. Phillips' Test Stand 2A also has been changed for a high-pressure liquid oxygen/liquid hydrogen capability for testing engine components. So far, changes to these stands have cost about $49 million. Test Stand 1A's changes are for EELV engine testing and 2A's for the government- and industry-sponsored Integrated High Payoff Rocket Propulsion Technology program to boost engine performance over the next 15 years. The federal government currently uses a fleet of expendable launch vehicles—Delta, Atlas, and Titan—to transport national security and civil satellites into space. According to DOD, these vehicles currently operate at or near their maximum performance capability.
In 1994, Congress directed DOD to develop a space launch modernization plan that led to the initiation of the EELV program. On December 20, 1996, the Air Force selected McDonnell Douglas’ Delta IV and a Lockheed Martin proposal for the “preliminary engineering and manufacturing development” phase of the competition to build the Air Force’s EELVs, a family of small, medium, and heavy launchers. Lockheed Martin’s EELV will use the Russian-designed RD-180 engine, to be built by Pratt and Whitney. Rocketdyne Division of Boeing North American is building the Delta IV’s first-stage RS-68 engine. In November 1996, Rocketdyne selected Phillips to test its engines in the second, or pre-engineering and manufacturing development, phase of the program. Originally, a single contractor for the third, or engineering and manufacturing development, phase of the EELV program was to have been selected in June 1998. The anticipated contract value for the third phase was $1.6 billion over approximately 6 years. However, on November 6, 1997, the Air Force announced a change in acquisition strategy to fund both Boeing’s and Lockheed Martin’s EELVs in the third phase of the program. Testing EELV engines in the next phase of the program is important to Stennis and Phillips. According to a Stennis official, Stennis has two test stands available for EELV engine testing in 1998, but without EELV engine testing, there are no identifiable customers starting in 1999 for these and another of its large test stands. And, as noted previously, the Air Force refurbished Phillips’ Test Stand 1A for EELV engine testing. This test stand has no other funded customers. Despite the Air Force’s efforts, it may have lost its EELV engine customer to NASA. On July 19, 1997, Boeing stated that it had selected Stennis to conduct development, certification, and production acceptance testing of the RS-68 engine. Boeing has not yet fully defined its test requirements, and its intention to test at Stennis is conditional pending satisfactory resolution of such issues as the amount of Stennis’ user fees. Boeing may also test this engine on Phillips’ Test Stand 1A, but it has not made a formal commitment to do so. The rocket propulsion alliance last met in October 1996 but did not discuss such major current issues as (1) consolidation of propulsion testing at NASA or elsewhere, (2) competition between NASA and the Air Force to test engines, and (3) investment decisions. According to NASA officials, the alliance is likely to be reactive and unlikely to initiate a consolidation-related evaluation on its own. At the October meeting, NASA described the reasons for making Stennis NASA’s center of excellence for rocket propulsion testing and noted its consolidation plan would be completed by early 1997. At the time of the October alliance meeting, NASA and the Air Force were competing to test EELV engines in the current phase of the program. Upgrades to Phillips’ test stands for EELV testing were noted at the meeting, but this investment was not critically discussed. Also not discussed was the role the alliance might play in evaluating future investment decisions or NASA’s effort to complete the component test facility at Stennis after the Air Force had started to refurbish its own component test stand at Phillips. According to a DOD official associated with the alliance, a test of its effectiveness is the ability of alliance members to review a proposed investment in test facilities.
In 1996, Congress required the preparation of a “joint plan for coordinating and eliminating unnecessary duplication in the operations and planned improvements of rocket engine and rocket engine component test facilities managed by the [Air Force and NASA]. The plan shall provide, to the extent practical, for the development of commonly funded and commonly operated facilities.” In a January 1997 response to congressional committees, DOD acknowledged that although NASA and the Air Force “do not yet have a formal plan,” a range of efforts was underway that would “form the basis for such a plan.” The efforts cited were Vision 21, the Quadrennial Defense Review, and the rocket propulsion alliance. The first two efforts are unlikely to form the basis of a joint plan because NASA is not a formal part of the Vision 21 review, and DOD does not intend that its 5-year plan to consolidate and restructure its laboratories and test and evaluation centers be a joint plan with other federal agencies. NASA also was not a formal part of the Quadrennial Defense Review of defense strategy. Nevertheless, NASA concurred with DOD’s response. DOD did not state in its letter whether it would prepare a joint plan for submission to Congress in the future. The rocket propulsion test alliance’s possible role in joint planning is problematic at this time, inasmuch as the alliance has not met since October 1996 and the requirement for a joint plan was not formally discussed at that meeting. There is an additional reason why Vision 21 cannot serve as the basis of the joint plan. DOD prepared, but did not submit, a legislative package for Vision 21; instead, it opted to include consolidation of its laboratories and test and evaluation centers in future BRAC rounds. But Congress, so far, has not accepted the need for such rounds. As a consequence, Vision 21’s future is unclear until Congress either changes its position on BRAC or new guidelines for Vision 21 are developed. NASA and DOD took a step toward creating a national perspective on testing in the area of aeronautics by agreeing in May 1997 to consider joint strategic management of their test facilities. And in October 1997, NASA and Air Force officials reached a verbal understanding on the scope and approach for joint strategic management, but they have yet to agree on key aspects of a management organization. Ultimately, if joint strategic management of aeronautics testing is successfully established, its adaptation to other types of test facilities could be considered. The October understanding was preceded by an agreement on May 5, 1997, between senior NASA and DOD officials to discuss issues associated with joint strategic management. In so doing, they rejected the two aeronautical alliances (wind tunnels and aeropropulsion) as the way to address a variety of management and investment issues. As a DOD assessment team reported in March 1997: “Each agency and Service manages its wind tunnel facilities independently. There is no structured oversight of the various facilities in the nation . . . . As a result, there is no focused approach to what the national needs are for the various facilities.” The DOD assessment team was skeptical that the two aeronautical alliances could effectively overcome this tradition of independence and recommended, in part, that DOD (1) establish a new office with NASA to manage the investment and test-technology-related funds for the nation’s core government wind tunnel facilities and (2) immediately initiate with NASA and industry a long-term program to build a new transonic wind tunnel.
The DOD assessment team proposed a new organization—National Aeronautical Facility Base—with members from the three military departments and NASA. The members would remain within their parent agencies and, in ad hoc fashion, constitute the new organization. The organization would not have authority over operations and maintenance funds, which would remain under the separate authorities of DOD and NASA. But the management organization would “make investments based on a national perspective without regard to whether the wind tunnel facility is DOD- or NASA-owned.” NASA’s aeronautical officials also were doubtful about the adequacy of the cooperative alliances, and in November 1996, before the AACB’s aeronautics panel, they recommended formation of an independent organization to strategically manage selected NASA and DOD wind tunnels and aeropropulsion test cells. However, in NASA’s proposal, the new organization would receive funding from participating agencies and possibly industry, and its staff would be full-time members of the organization. In proposing their different versions of joint management, NASA and DOD officials noted that in 1994 seven European aeronautical research establishments had combined to form a joint management organization called the Association of European Research Establishments in Aeronautics, which now manages five wind tunnels in four countries. NASA and DOD officials believe that Europe’s relatively new wind tunnels and the association of research establishments have combined to make Europe’s facilities especially competitive in attracting new test-related business. In October 1997, NASA and Air Force officials reached a verbal understanding on a scope and approach for a joint strategic management organization. The understanding proposes that NASA and DOD will continue to own, operate, and fund their own test facilities. The purpose of the new management organization will be to provide strategic management in four areas: (1) planning (includes making foreign competitive assessments and developing an associated strategy), (2) test technology (includes advocacy for resources), (3) operations policy (includes reviewing, coordinating, and recommending facilities’ test schedules), and (4) business management (includes, as discussed below, cost accounting and charging policy). The new organization will be under the review authority of the AACB. However, basic questions remain about strategic joint management, including the new organization’s structure and authority to make binding decisions and recommendations. NASA and DOD officials have not agreed on a charter for the new management organization. The major facilities study team recognized that consolidation of test facilities depended on the development of “consistent/comparable” cost models because NASA and DOD currently differ on how much, and whom, to charge for testing. Generally, NASA does not charge for use of its aeronautical test facilities, while DOD does. The major facilities study team developed some information on cost models. The team noted that although NASA’s and DOD’s “direct” and “indirect” costs were comparable at summary levels, differences over what to charge users of test services remained. In 1993, Congress gave DOD increased flexibility to adjust charges for indirect costs for commercial users of its Major Range and Test Facility Base. NASA does not charge customers of its aeronautical facilities unless they receive “special benefits” over and beyond those that accrue to the public at large.
For example, NASA charges commercial customers to use its wind tunnels if their tests are not officially supported by a government contract or letter of intent, or, if so supported, are beyond the scope of testing requested by the government. On the other hand, DOD’s Major Range and Test Facility Base charges other federal agencies and commercial customers 100 percent of direct costs and a portion of indirect costs. By agreeing to the recommendation to establish cooperative alliances, the AACB accepted the proposition that institutionalizing cooperative behavior in this way would add value to the already established cooperative relationship between NASA and DOD. Progress toward validating this proposition has been slow and sporadic. The alliances appear to offer the opportunity for an ongoing evaluation of test-related issues and cost-saving efficiencies of mutual interest to NASA and DOD and thereby create the basis for the testing community itself to construct a national perspective on these issues. While this perspective may be emerging in some cases, it is essentially absent in others. Because most alliances have not convened, the development of a national perspective from the bottom up remains largely untested. While the effect of such a delay is unclear, it may indicate that some NASA and DOD test officials do not see the alliances as having practical value and that, with few exceptions, they would not object to continuing the pre-alliance status quo. In 1996, Congress began to push for a national perspective with the requirement for joint planning, common funding, and common operations of NASA’s and DOD’s rocket propulsion test facilities. NASA’s and DOD’s formal reply to this requirement was not responsive. Consequently, it may be appropriate to reaffirm and extend the search for a national perspective on test facility issues begun in the 1996 legislation. Congressional intent, as reflected in the statutory requirement for joint planning of rocket propulsion test facilities, is not being fully met by NASA and DOD. Congress may wish to consider reaffirming its intention in this regard and extending its joint planning requirement to other types of aerospace test facilities, including a requirement that NASA and DOD assess the possible extension of joint management of aeronautical facilities to other types of test facilities, especially rocket propulsion. In written comments on a draft of this report, DOD concurred that NASA and DOD need to coordinate more on infrastructure planning but partially concurred that progress in institutionalizing cooperation was slow and sporadic. While DOD agreed that progress was slow in some areas, it believed we should give more credit to the progress that has been made. DOD noted that, even without formal alliance charters, increases have occurred in interagency communications, interagency meetings on coordination of test technology, joint management alternatives and databases, and the agencies’ understanding of each other’s policies and capabilities. DOD also partially concurred with our suggestion that Congress may wish to consider reaffirming its intention for joint planning of rocket propulsion test facilities and assess the possible extension of joint planning to other types of aerospace test facilities. DOD emphasized that it fully intends to meet congressional requirements and said that further legislation is either not needed or premature. DOD’s comments and our evaluation of them are included in appendix III.
While an objective of our report is to determine the extent to which cooperative alliances have been operating on a regular basis, we recognized cooperative activities that preceded the signing of the alliances’ charters in January 1998. For example, we noted cooperation on (1) testing in subsonic wind tunnels, (2) testing the effects of icing on aircraft, (3) developing wind tunnel test technology plans, (4) discussing rocket engine test issues, and (5) boosting rocket engine performance over the next 15 years. In particular, one activity cited by DOD—joint management alternatives—is discussed in some detail. In responding to our conclusion and matter for congressional consideration, DOD did not state when it intends to comply with the statutory requirement. Therefore, because DOD and NASA have not been responsive to the congressional requirement, we believe that a reaffirmation of congressional intent, which would not necessarily require additional legislation, might be appropriate. We did not suggest that Congress extend joint management to other types of aerospace test facilities, only that Congress consider requiring an assessment of that possibility. We believe our matter for congressional consideration remains valid. In its written comments, NASA said the report could be strengthened by including updated information and identifying past cooperative activities undertaken by the alliances. As discussed previously, we believe our report identified past cooperative activities. We updated the report where appropriate. NASA’s comments are reprinted in appendix IV. NASA and DOD also provided technical comments, which we have incorporated where appropriate. To accomplish our objectives, we obtained documents from and interviewed officials at NASA headquarters in Washington, D.C.; NASA’s Langley Research Center, Virginia; Goddard Space Flight Center, Maryland; and John C. Stennis Space Center, Mississippi. We also held discussions with and obtained documents from officials in the Office of the Under Secretary of Defense for Acquisition and Technology and the Air Force’s Test and Evaluation Directorate, Washington, D.C.; the Air Force’s Phillips Laboratory Propulsion Directorate, Edwards Air Force Base, California (now part of the Air Force Research Laboratory, Wright-Patterson Air Force Base, Ohio); the Air Force’s EELV program office, Los Angeles Air Force Base, California; and the Air Force Materiel Command’s Arnold Engineering Development Center, Arnold Air Force Base, Tennessee. To evaluate NASA’s and DOD’s formal cooperation, we interviewed cognizant officials about the chartering and perceived value of the test facility alliances and reviewed the minutes of all formal alliance and AACB panel meetings held between May 1996 and August 1997. With regard to competition to test EELV rocket engines, we interviewed cognizant officials and reviewed documents at Stennis Space Center and the Propulsion Directorate of Phillips Laboratory on the perceived advantages and disadvantages of each test facility in relation to EELV testing. We also discussed EELV testing with officials at the Air Force’s EELV program office and with officials of one of the EELV engine contractors. To evaluate NASA’s and DOD’s response to a congressional requirement to prepare a joint plan on rocket propulsion test facilities, we interviewed officials about DOD’s response and analyzed documents obtained at the Office of the Under Secretary of Defense for Acquisition and Technology.
To review proposals for joint management of wind tunnels, we interviewed cognizant officials about the perceived need for a new management arrangement and reviewed joint-management proposals at the Langley and Arnold centers. We performed our work between November 1996 and December 1997 in accordance with generally accepted government auditing standards. We are sending copies of this report to the NASA Administrator; the Secretary of Defense; and the Director, Office of Management and Budget. We will also make copies available to others upon request. If you or your staff have any questions, I can be reached at (202) 512-4841. Major contributors to this report are listed in appendix V. The National Aeronautics and Space Administration (NASA) and the Department of Defense (DOD) formed cooperative alliances for the following types of test facilities: Wind tunnels are used to test aerodynamic forces (lift, drag, and side force) acting on scale models of air and spacecraft in a controlled airstream at different airspeeds. The challenge of testing in a wind tunnel is the applicability of results obtained with a scale model to full-sized air and spacecraft. Figure I.1 depicts a NASA wind tunnel that consisted of three test sections fed by one power source: four coupled electric motors capable of 180,000 horsepower when operating on a continuous basis. Aeropropulsion test cells are used to test air-breathing engines under simulated flight conditions. (See fig. I.2.) Rocket engine test stands are used to test chemical, solar, electric, and other types of rocket engines, and engine components such as fuel pumps and injector systems. Some test stands can simulate high altitudes. The test stand in figure I.3 is 160 feet high and can test engines capable of producing 1.5 million pounds of thrust. Space environmental simulation chambers are used to test spacecraft, instruments, and components in ground handling, launch, and powered and orbital flight environments. Test facilities include acoustic and thermal vacuum chambers. Some simulation chambers are capable of creating a vacuum of less than one billionth of atmospheric pressure. (See fig. I.4.) Arc-heated facilities are used for two fundamental purposes: aerothermal testing of materials and structures to simulate the aerodynamic heating environment of hypersonic flight, and aeropropulsion testing of engines that operate at high velocities and temperatures. NASA tests heating of Earth and planetary entry vehicles, and DOD tests heating of ballistic and other types of missiles. The arc-heated facility illustrated in figure I.5 is capable of heating gas to more than 10,000 degrees Fahrenheit and directing it under pressure at an object or material to be tested. Hypervelocity gas guns are used for impact testing. NASA tests meteoroid/orbital debris-sized particles impacting on space structures such as the International Space Station. DOD tests ballistic missile intercept systems. In figure I.6, a powder charge drives a piston into trapped hydrogen, compressing it. The petal valve then ruptures, forcing the projectile and sabot down the launch tube. The sabot is machined plastic that protects the launch tube from the projectile. The following are GAO’s comments on DOD’s letter dated December 23, 1997. 1. Refer to the “agency comments and our evaluation” section of the report. 2. We do not indicate that the alliances were not pursuing the intentions of their charters. 3.
We did not review the basis of the Air Force’s decision to upgrade some of Phillips Laboratory’s test stands, nor did we evaluate the EELV program. The point of our discussion of EELV engine testing was to establish that the NASA/DOD relationship on rocket engine testing is defined by both competitive and cooperative behavior. We used EELV engine testing to illustrate the competitive aspect of this relationship. The congressional requirement for joint planning of rocket propulsion test facilities establishes the context of our discussion because joint planning is one possible way to manage the NASA/DOD relationship in this area. With respect to DOD’s comment on the rocket propulsion alliance, we did not state that the alliance should have reviewed the decision to upgrade Phillips’ test stands. Our point is that, in the opinion of some alliance members, a test of the alliance’s future relevance is its determination and ability to evaluate investment decisions of the type that had been made at Phillips and Stennis Space Center. 4. Determining rocket engine test capacity was not an objective of our report. We note that when DOD states that “Both NASA and Air Force officials have challenged the assumption that there is excess rocket test capacity with the two agencies,” it is, in fact, disagreeing with the conclusion of its own May 1996 report on NASA/DOD cooperative initiatives. DOD’s response does not provide specific information as to why NASA’s and DOD’s perception changed between May 1996 and October 1996, when DOD says the rocket propulsion alliance determined that there was no excess test capacity in the alliance for the next 2 years. Subsequent to DOD’s response, we analyzed the minutes of the October 1996 meeting of the rocket propulsion alliance and concluded that these minutes do not clearly reflect that a discussion on test capacity took place or that a determination about capacity was made. 5. We share DOD’s concern about the premature expansion of joint strategic management to other types of test facilities. As we stated in the report, ultimately, if joint strategic management of aeronautics test facilities is successfully established, its adaptation to other types of test facilities could be considered. Ambrose McGraw and Jeff Webster were the major contributors to this report.
GAO reviewed the National Aeronautics and Space Administration's (NASA) and Department of Defense's (DOD) cooperation in developing a national perspective on aerospace test facilities, focusing on: (1) the extent to which NASA/DOD working groups on major test facilities have been operating on a regular basis; (2) NASA's and DOD's actions in response to a future need to test an engine for new Air Force rockets; (3) whether NASA and DOD prepared a congressionally required joint plan on rocket propulsion test facilities; and (4) whether NASA and DOD are implementing a DOD assessment team's recommendation in March 1997 to jointly manage with NASA certain aeronautical test facilities. GAO noted that: (1) the promise of closer NASA/DOD cooperation and the development of a national perspective on aerospace test facilities remains largely unfulfilled because NASA and DOD: (a) have not yet convened most test facility alliances; (b) compete with each other to test engines for new rockets; and (c) did not prepare a congressionally required joint plan on rocket propulsion test facilities; (2) although NASA and DOD have agreed to go beyond cooperative alliances in aeronautics and jointly manage their aeronautical test facilities, they have not yet reached agreement on key aspects of a management organization; (3) NASA and DOD took 20 months (May 1996 through December 1997) to negotiate and sign agreements formally establishing the six test facility-related cooperative alliances; (4) despite the formation of the rocket propulsion alliance, NASA and DOD compete against each other to test engines for new rocket programs; (5) a principal arena of competition is the next phase of the Air Force's Evolved Expendable Launch Vehicle program; (6) DOD did not prepare a legislatively mandated joint plan with NASA to coordinate rocket propulsion test facilities; (7) in a letter to congressional committee chairs and other members, DOD said that the bases of such a plan are: (a) ongoing activities such as Vision 21; (b) the May 1997 Quadrennial Defense Review of defense strategy; and (c) activities of the rocket propulsion alliance; (8) however, these efforts are unlikely to form the basis of a joint plan because NASA is not participating in either Vision 21 or the Defense Review; (9) in October 1997, NASA and Air Force officials took a step toward creating a national perspective on test facilities in the aeronautics area; (10) specifically, they reached an understanding on the scope and approach for joint strategic management of their aeronautical test facilities, including a new management organization; (11) however, they have not yet resolved basic issues, such as the organization's structure and authority; and (12) ultimately, if joint strategic management of aeronautics test facilities is successfully established, its adaptation to other types of test facilities could be considered.
On November 25, 2002, President Bush signed the Homeland Security Act of 2002, as amended, officially establishing DHS, with the primary mission of protecting the American homeland. DHS became operational on January 24, 2003. On March 1, 2003, under the President’s reorganization plan, 22 agencies and approximately 181,000 employees were transferred to the new department. DHS is currently organized into the components shown in table 1, seven of which are referred to as key operational components. According to DHS, the seven key operational components lead the department’s frontline activities to protect the nation. The remaining DHS components provide resources, analysis, equipment, research, policy development, and support to ensure that the frontline organizations have the tools and resources to accomplish the DHS mission. The department’s five key mission areas are (1) preventing terrorism and enhancing security, (2) securing and managing the nation’s borders, (3) enforcing and administering the nation’s immigration laws, (4) safeguarding and securing cyberspace, and (5) ensuring resilience to disasters. In fiscal year 2013, DHS had about 226,000 full-time equivalent staff, and its budgetary resources totaled approximately $91 billion. Over three decades ago, Congress established independent IG offices throughout the federal government as a result of growing reports of serious and widespread internal control breakdowns resulting in monetary losses and reduced effectiveness or efficiency in federal activities. The IG Act established IG offices at major departments and agencies, including DHS, to prevent and detect fraud, waste, abuse, and mismanagement in their agencies’ programs and operations; to conduct and supervise audits, inspections, and investigations; and to recommend policies to promote economy, efficiency, and effectiveness. As required by the IG Act, the DHS IG is presidentially appointed and confirmed by the Senate. The DHS OIG consists of several different component offices, as described in table 2. The IG Act provides specific protections to IG independence that are necessary largely because of the unusual reporting requirements of the IGs, who are subject to the general supervision and budget processes of the agencies they audit while at the same time being expected to provide independent reports of their work to Congress. Protections to IG independence include a prohibition on the ability of the agency head to prevent or prohibit the IG from initiating, carrying out, or completing any audit or investigation. This prohibition is directed at helping to protect the OIG from external forces that could compromise its independence. Although these protections apply to the DHS OIG, exceptions exist for the Secretary of DHS when certain sensitive information or situations are involved. The IG’s personal independence and the need to appear independent to knowledgeable third parties are also critical when the IG makes decisions related to the nature and scope of audit and investigative work performed by the OIG. An independent and reliable OIG structure firmly established at federal agencies is important for ensuring effective oversight of programs and operations. Auditors who work for OIGs are required to adhere to independence standards included in Government Auditing Standards, also known as generally accepted government auditing standards.
Government Auditing Standards states that an audit organization and individual auditor must be free from impairments to independence and must avoid the appearance of an impairment. Auditors and audit organizations must maintain independence so that their opinions, findings, conclusions, judgments, and recommendations will be impartial and be viewed as impartial by objective third parties with knowledge of the relevant information. This requires staff to act with integrity, exercise objectivity and professional skepticism, and avoid circumstances or threats to independence that would cause a reasonable and informed third party to believe that an OIG’s work had been compromised. In addition, to the extent legally permitted and not inconsistent with Government Auditing Standards, federal OIGs are required, as appropriate, to adhere to the quality standards promulgated by CIGIE. These standards include requirements for OIGs, including OIG employees who work on investigations and inspections, to maintain their independence. For additional information on quality and independence standards for OIGs, see appendix II. During fiscal years 2012 and 2013, the DHS OIG issued 361 audit and inspection reports. These reports provided coverage of all DHS key operational components, management challenges, and high-risk areas. According to the OIG, the majority of its audits and inspections were performed in response to congressional mandates or requests, while others were planned by the OIG based on factors such as the department’s strategic plan, which identifies strategic missions and priorities, the OIG’s annual performance plan, and the major management challenges identified for the department. Our review of issues addressed in the DHS OIG’s audit and inspection reports issued during fiscal years 2012 and 2013 found that the OIG provided oversight of all components that DHS has identified as its key operational components. In addition, several OIG reports covered other DHS components, multiple components, and department-wide issues, as shown in table 3. As table 3 illustrates, the majority of the OIG’s audit and inspection reports issued in fiscal years 2012 and 2013 addressed issues at the Federal Emergency Management Agency (FEMA). Specifically, 200 of these reports (55 percent) pertained solely to FEMA. Of these 200 reports, 166 involved audits of FEMA grants, including 118 reports on Disaster Assistance Grants. According to OIG officials, the large number of FEMA audits can be attributed to a number of factors. For example, the OIG has a statutory mandate to annually conduct audits of a sample of states and high-risk urban areas that receive grants administered by DHS to prevent, prepare for, protect against, or respond to natural disasters, acts of terrorism, and other disasters. In addition, FEMA’s programs are considered to have a higher financial risk than those of other DHS components because FEMA distributes disaster relief funds to so many communities nationwide—for example, as of May 2014, over 135,000 applicants (e.g., local governments and nonprofit organizations to which public assistance funds are awarded) were receiving disaster assistance. Moreover, FEMA has the largest budget of all DHS components, with total budgetary resources of over $36 billion for fiscal year 2013, or about 38 percent of the department’s total budget.
In addition, the OIG received $27 million from the Disaster Relief Fund in fiscal year 2013 specifically for conducting audits and investigations related to disasters. Of the 361 audit and inspection reports issued during fiscal years 2012 and 2013, we found that 355 OIG reports pertained to the department’s management challenges reported by the OIG. OIGs began annually identifying management challenges (i.e., what the IG considers to be the most serious management and performance challenges facing the agency) for their respective departments or agencies in 1997 at the request of Congress and continue to do so based on requirements of the Reports Consolidation Act of 2000, which requires agencies to include their IGs’ lists of significant management challenges in their annual performance and accountability reports to the President, the Office of Management and Budget, and Congress. DHS reports this information in its annual agency financial report. As shown in table 4, the OIG reports issued during fiscal years 2012 and 2013 covered all nine management challenges identified for that time period. The DHS OIG’s organizational structure includes all of the positions required by the IG Act, as well as several nonstatutory positions that help the OIG carry out its responsibilities. According to the OIG, changes to the OIG’s structure made during fiscal year 2013 were intended to help further strengthen its organizational independence. The OIG also has policies and procedures designed to carry out most of the responsibilities required by the IG Act. However, we identified three areas—coordinating with the FBI, protecting employees’ identities, and obtaining legal advice—in which the OIG’s policies and procedures could be improved to better meet its responsibilities. The IG Act requires that each OIG have certain specific positions in its organization. First, for cabinet departments and certain major agencies, it requires that each OIG be headed by an IG appointed by the President and confirmed by the Senate. The act also requires that each IG appoint or designate certain other positions, including an Assistant IG (AIG) for Audits, an AIG for Investigations, and a Whistleblower Protection Ombudsman. The IG Act also requires that each IG obtain legal advice from a counsel reporting directly to the IG or to another IG. In addition, a special provision for the DHS OIG requires the IG to designate a senior official to carry out responsibilities related to receiving and investigating complaints that allege abuses of civil rights or civil liberties. The DHS OIG has all of the positions required by the IG Act, which are highlighted in the OIG’s organization chart in figure 1. For example, the DHS OIG has its own Office of Counsel, which is currently headed by a Deputy Counsel to the IG. Furthermore, in January 2012, the OIG issued a policy directive that designated the position of the AIG for Investigations as the Civil Rights and Civil Liberties Coordinator for investigative matters. In November 2012, the former Acting IG designated an individual within the Office of Investigations as the acting Whistleblower Protection Ombudsman with a goal of improving the availability of whistleblower protection information to employees throughout the department. In August 2013, after this function was transferred to the Office of Integrity and Quality Oversight (IQO), the same person became the permanent Whistleblower Protection Ombudsman.
The DHS OIG has created several other senior positions in addition to those required by the IG Act, which are also shown in figure 1. As of July 2014, these positions include the Deputy IG and Chief of Staff within the OIG executive office, who provide executive leadership to the OIG. In addition, the OIG has AIGs to lead its Offices of Emergency Management Oversight, Information Technology Audits, Inspections, Integrity and Quality Oversight, and Management. As discussed further below, the Deputy IG position has been vacant since December 2013, and all of the AIGs report directly to the IG as of July 2014. While the OIG’s organizational structure appears reasonable and includes the positions required by the IG Act, at the time of our audit, several of its key positions had not been filled with permanent employees for extended periods of time or had only recently been filled. In November 2013, the Senate held a hearing related to this issue in which the Ranking Member of the Subcommittee on Efficiency and Effectiveness of Federal Programs and the Federal Workforce, Committee on Homeland Security and Governmental Affairs, noted that vacancies in the IG and several key OIG positions left the agency without proper leadership. For example, the IG position was either occupied by an acting IG or vacant from February 27, 2011, through March 5, 2014. Specifically, with the former IG’s resignation in February 2011, the Secretary designated the Deputy IG as the Acting IG, who served in that position until January 2013, when he reached the statutory time limit for serving as an acting officer. At that point, the Acting IG reverted to the title of Deputy IG while remaining the de facto head of the OIG. Because the Deputy IG was the head of the organization during that time, a Chief Operating Officer position was created in March 2013 to assume the functions that would otherwise be carried out by the Deputy IG once the IG position was filled. The Deputy IG once again became Acting IG in November 2013, when an IG was nominated, and served until his resignation in December 2013. Upon the Acting IG’s resignation, the Chief Operating Officer served as the head of the OIG until a new IG was confirmed by the Senate on March 6, 2014, and assumed the post on March 10, 2014. The Chief Operating Officer retired in April 2014, and the position was eliminated in July 2014. Other executive positions within the OIG were also filled by acting staff for extended periods, including the following: The Counsel to the IG position had been filled by an Acting Counsel since December 2012 and remained so as of September 2014. The AIG for Investigations was acting for 1 year before becoming permanent in May 2013. The Office of Inspections has had an acting or vacant AIG position since August 2012, except for approximately 2½ months in early 2013 when it had an AIG. In January 2013, the OIG’s Office of Management evaluated the organizational structure of the OIG in regard to critical issues it was facing, including two situations that occurred in 2011 and 2012 that potentially threatened the credibility and independence of the OIG. One incident, which is detailed later in this report, arose from a conflict of interest involving the former Acting IG, who had a family member working for a DHS unit associated with six audits or inspections, and required the reissuance of the associated reports.
As a result of the OIG’s Office of Management’s study of this incident, the Chief Operating Officer position was created, as previously mentioned, to take on the normal duties of the second-in-command while either the Deputy IG or IG position was vacant. This included reviewing reports from which the former Acting IG had recused himself, in order to avoid future conflicts of interest and to improve OIG independence. The second incident resulted from a lack of independence of an inspection team from the Office of Investigations that was conducting an internal inspection of an Office of Investigations field office in September 2011. The Department of Justice subsequently investigated this incident, and a grand jury issued an indictment in April 2013 against two employees in the Office of Investigations field office. The defendants were accused of falsifying documents prior to the internal inspection in order to impede, obstruct, and influence the inspection and to conceal from the inspectors severe lapses in the field office’s compliance with the OIG’s investigative standards and internal policies. In March 2014, a federal court found one of the employees guilty of conspiracy and five counts of falsifying government documents. As a result of the OIG’s Office of Management’s study related to this incident, the DHS OIG established IQO in the summer of 2013 to enhance its organizational independence and oversight of its operations. According to OIG officials, this office was established to foster a more efficient and responsive OIG, revitalize oversight efforts, and better serve employees. At the time of our review, IQO was taking over various quality control functions previously carried out by other OIG offices. Specifically, IQO was given responsibility for the following functions: internal inspections of the OIG’s audit, inspection, and investigation offices, which were previously conducted by the Offices of Management and Investigations; quality review of draft audit and inspection reports, which was previously done primarily by a report reviewer in the executive office (the same individual continues reviewing reports but is now part of IQO); complaint intake, including the OIG hotline and whistleblower protection functions, which were previously conducted by the Office of Investigations; and an ombudsman for OIG employees, a function that was also previously under the Office of Investigations and at that time was set up only to address concerns of Office of Investigations personnel. OIG officials indicated that the hotline and whistleblower protection functions would be in a better position to protect confidentiality and would have better visibility if they were separate from the Office of Investigations, as is the case in many similar organizations. In addition, IQO officials said that they were taking steps to strengthen both the ombudsman and whistleblower protection functions by developing formal policies for these positions and providing training about these functions to the entire department. The establishment of IQO is a positive step toward improving the OIG’s organizational independence, both for the reasons cited by the OIG and because it provides a separation of duties where independence problems previously existed. The DHS OIG’s policies and procedures contained in its manuals and directives indicate that the OIG’s roles and responsibilities are generally consistent with selected requirements of the IG Act.
Provisions of the IG Act include requirements that IGs carry out the following responsibilities, among others: conduct audits and investigations related to department programs and operations; receive and investigate complaints or information from DHS employees regarding possible violations of law, rules, or regulations, or mismanagement, waste, abuse, or substantial danger to public health and safety; maintain oversight responsibility for internal investigations performed by DHS components, such as the U.S. Secret Service, U.S. Immigration and Customs Enforcement (ICE), and U.S. Customs and Border Protection (CBP); establish and maintain a direct link on the home page of the OIG website for individuals to report fraud, waste, and abuse; recommend policies for, and conduct, supervise, or coordinate activities for, promoting economy, efficiency, and effectiveness in the administration of the department’s programs and operations or preventing and detecting fraud and abuse; and report expeditiously to the Attorney General whenever the IG has reasonable grounds to believe there has been a violation of federal criminal law. As discussed further in appendix III, we found that the DHS OIG had established policies and procedures to carry out these and other selected requirements of the IG Act. Specifically, the OIG had manuals containing detailed policies and procedures to follow in carrying out its audits, inspections, and investigations. For example, the manual for the Office of Investigations—the Special Agent Handbook—included policies and procedures for receiving and investigating complaints. While most of the OIG’s roles and responsibilities are consistent with the IG Act, we identified three areas—coordinating with the FBI, protecting employees’ identities, and obtaining legal advice—in which the OIG could improve its policies and procedures to be more effective in meeting its mission objectives. Although the OIG has a policy for notifying the FBI of any criminal investigations that it or another DHS component opens, it does not have an agreement with the FBI for sharing other information or otherwise coordinating efforts with the FBI on tips or allegations related to border corruption that are not under investigation. The IG Act requires OIGs to recommend policies for, and to conduct, supervise, or coordinate relationships between, the OIG and other federal agencies with respect to the identification and prosecution of participants in fraud or abuse. According to Attorney General guidelines, OIGs have primary responsibility for preventing and detecting waste and abuse and concurrent responsibility with the FBI for preventing and detecting fraud and other criminal activity within their agencies and their agencies’ programs. Congress has repeatedly expressed concern about border corruption and has held hearings on this issue. For example, in June 2011, the Senate held a hearing on CBP’s and the OIG’s collaboration in the fight to prevent border corruption. The hearing included a discussion of OIG efforts to work with the FBI as well as CBP in preventing corruption among federal employees involved in protecting the U.S. borders. To investigate corruption among employees of agencies involved in protecting the U.S. border, the FBI has established several local border corruption task forces (BCTF), many of which operate along the southwest border. The OIG participates in local BCTFs both in the southwest and in other locations.
In addition, the FBI established the National BCTF to help provide guidance and oversight of the local BCTFs. To facilitate the operation of the National BCTF, in 2010, the FBI signed a memorandum of understanding (MOU) with CBP and the Transportation Security Administration. Although the OIG has agents assigned to the BCTFs, OIG officials stated that they did not sign this MOU because the ongoing dialogue has not yielded an agreement that recognizes the OIG’s statutory authority and adequately delineates the roles and responsibilities of all parties involved, including the DHS components. OIG officials also stated that they could not reach agreement on sharing information with the FBI because, in their view, the proposed agreement was not sufficient to ensure mutual information sharing between the OIG and the FBI. Since at least 2010, the OIG and FBI have been working to reach agreement on an MOU describing how the parties, along with other DHS components, would work together on the National BCTF. OIG officials told us that since a permanent AIG for Investigations was designated in May 2013, he has been making efforts to improve relationships. For example, the AIG for Investigations has had more meetings with the FBI and the DHS components involved in the National BCTF, such as CBP and ICE, to discuss coordination and information-sharing efforts as well as ways to reach agreement on a BCTF MOU. The AIG for Investigations also supports the National BCTF and is working with the FBI to reach an agreement, but he stated that the information to be shared needs to be clearly defined and that the FBI must also agree to share information. Until the OIG and FBI reach agreement on working collaboratively, they are at risk of duplicating efforts or missing opportunities for taking effective action against border corruption. Recognizing this risk, the Senate Committee on Appropriations report for the DHS appropriations bill, 2014, directed the DHS Deputy Secretary—jointly with the OIG, CBP, and ICE—to report not later than March 17, 2014, on specific steps being taken to further address the process for investigating cases of corruption and the plan to work as a unified DHS with the FBI’s BCTF. In addition, the Senate Committee on Appropriations report for the DHS appropriations bill, 2015, directed the DHS Deputy Secretary—jointly with the OIG, CBP, and ICE—to submit a status update report on the same issues no later than 60 days after the enactment of the DHS Appropriations Act, 2015. The OIG receives and reviews complaints filed by DHS employees and the public regarding allegations of misconduct, including criminal misconduct, by DHS employees. It receives these complaints both directly from employees and through DHS components, where some complaints are initially filed. However, the OIG’s process for handling complaints that it receives directly from employees does not include adequate internal controls to provide reasonable assurance that the identities of employees who file complaints and request confidentiality will be protected.
However, the OIG’s process for recording complaints and forwarding them to DHS components when necessary involves certain manual procedures, usually carried out by only one OIG complaint intake employee per complaint without supervisory review. Because this process is subject to human error, the OIG is at higher risk of being unable to ensure that employees’ identities will be protected. During fiscal year 2013, according to OIG data provided to us, about 50 percent of the complaints received by the OIG came through the Joint Intake Center run by CBP and ICE, and another 6 percent were received from other DHS components. The rest of the complaints were received directly from complainants through the OIG’s website or by e-mail, fax, or phone. Complaints received through the website (about 15 percent) are automatically recorded in the OIG’s complaint database, the Enforcement Data System, while complaints received through all other means, including those from the Joint Intake Center, must be manually recorded in the Enforcement Data System. Although the transfer of complaint information from the OIG’s website into the Enforcement Data System is more automated than for complaints from other sources, the website lacks certain controls, and their absence could result in the inadvertent disclosure of employees’ identities. For example, when filing a complaint on the website, an employee who does not want to allow full disclosure of his or her identity can choose to be (1) anonymous or (2) confidential. However, the website lacks certain controls over these two categories. The anonymous category is intended to mean that the complainant does not have to supply any identifying information; to select anonymity, a complainant can check a box on the website. However, after complainants make this selection, the website still allows them to provide their names and other personal information, which could potentially be disclosed. The confidential category is intended to mean that the employee provides identifying information but does not want it released outside of the OIG. If an employee wants confidentiality rather than anonymity, the employee must write in the request for confidentiality on the website form because the website does not have a box to check for this type of complaint. As a result, even though the Enforcement Data System includes a “complaint confidential” box, a request for confidentiality on the website does not automatically record this request in the Enforcement Data System. Instead, requests for confidentiality depend on the analyst who subsequently reviews the complaint manually marking the correct box in the system. Regardless of how a complaint is received, once it is recorded in the Enforcement Data System, it is initially reviewed by a complaint intake analyst who determines if the complaint should be forwarded to a field office of the OIG Office of Investigations in the complainant’s jurisdiction. If it is forwarded, an OIG investigator in that office reviews the complaint and decides whether to open an investigation. If the OIG decides not to investigate the complaint, and the complaint was received through the Joint Intake Center or another DHS component, a complaint intake analyst will usually send it back to the component without first notifying the complainant, since the component has already had access to the complaint information.
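The website control gaps described above, together with the consent-gated forwarding step discussed in the surrounding text, lend themselves to simple automated checks. The following is a minimal sketch in Python of what such controls might look like; the field names, form structure, and review step are illustrative assumptions for discussion, not the actual design of the OIG website or the Enforcement Data System.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical complaint record; the field names are illustrative and
# do not reflect the actual Enforcement Data System schema.
@dataclass
class Complaint:
    narrative: str
    anonymous: bool = False
    confidential: bool = False  # a structured flag, not a free-text note
    name: Optional[str] = None
    email: Optional[str] = None

def record_complaint(form: dict) -> Complaint:
    """Apply the two controls the report finds missing on the website:
    (1) strip identifying fields from anonymous filings even if the
    complainant supplied them anyway, and (2) capture a confidentiality
    request as a checkbox-style flag so the database field is set
    automatically rather than by an analyst reading free text."""
    c = Complaint(
        narrative=form.get("narrative", ""),
        anonymous=bool(form.get("anonymous", False)),
        confidential=bool(form.get("confidential", False)),
        name=form.get("name"),
        email=form.get("email"),
    )
    if c.anonymous:
        c.name = None   # control (1): discard identity data on anonymous filings
        c.email = None
    return c

def forward_to_component(c: Complaint, consent_given: bool,
                         supervisor_approved: bool) -> dict:
    """Build the record forwarded to a DHS component. Identity fields
    are included only with the complainant's consent, and a second
    reviewer must confirm the decision, supplying the secondary check
    the report notes is absent from the manual process."""
    if not supervisor_approved:
        raise PermissionError("supervisory review required before forwarding")
    record = {"narrative": c.narrative}
    if consent_given:
        # identity fields are already None for anonymous complaints
        record["name"] = c.name
        record["email"] = c.email
    return record
```

The design point of the sketch is that making confidentiality a structured field lets downstream systems enforce it mechanically, whereas the write-in request described above survives only as long as an analyst notices it.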
However, if the complaint was received directly from an employee rather than through a DHS component (e.g., through the OIG website or by e-mail, fax, or phone), the intake analyst will send an e-mail to the employee requesting his or her consent to forward the complaint to the DHS component. If the employee provides consent, the analyst forwards the complaint to the component via e-mail. If an employee does not provide consent, the complaint may still be sent to the component, but without any identifying information. However, there is no secondary check of this manual process, such as a supervisory review. As a result, the protection of an employee’s identity depends on the accuracy and integrity of the individual processing the complaint. Without additional controls, the OIG could inadvertently disclose employees’ identities without their consent. Such disclosure can put employees at risk of retaliation by the DHS components in which they work and could discourage other employees from filing complaints. Officials in IQO said that they were aware of these issues and were developing standard operating procedures and considering improvements to the website and the Enforcement Data System to better protect employees’ identities, but they did not indicate when these would be complete. While the OIG has its own Office of Counsel, it sometimes also obtains advice or discusses legal issues with the department’s counsel. As previously discussed, the IG Act requires that an IG obtain legal advice from either a counsel reporting directly to the IG or a counsel reporting to another IG. However, the act does not preclude an IG from obtaining advice from its respective department’s counsel if appropriate. For example, it may be appropriate for the IG to consult the department’s counsel for legal advice regarding certain personnel practices if the OIG is subject to the human capital laws or regulations of the department, or regarding ethics issues if the designated ethics official is a department employee. In its recent report on an investigation into allegations of misconduct by the former Acting IG, the Subcommittee on Financial and Contracting Oversight, Senate Committee on Homeland Security and Governmental Affairs, reported that, according to former OIG officials, the former Acting IG inappropriately sought advice from the department’s counsel. We interviewed former and current OIG officials and reviewed various e-mails provided to us that indicated the former Acting IG sought legal advice from a counsel at the department on several occasions. In general, we found the consultations were likely acceptable and appropriate, as the legal matters appeared to be under the department’s purview. However, in one e-mail the former Acting IG stated that he had lost confidence in his counsel and wanted legal help from a counsel at the department for the next 4 months. Because the former Acting IG’s request did not specify the type of legal assistance needed, it was not clear whether his request or any subsequent legal assistance received was for appropriate matters. The OIG did not have a policy stating the legal requirement for the IG to obtain legal advice from either the IG’s counsel or from counsel reporting to another IG, nor did it have any guidelines specifying the circumstances in which it would be appropriate for the OIG to consult with the department’s counsel. Having such a policy and guidelines could help avoid any potential impairment to independence when seeking legal advice.
The OIG’s Deputy Counsel stated that he was considering whether to develop a policy and that he has asked a working group to draft guidelines on consultations between the OIG and the department’s counsel. Independence is one of the most important elements of a strong OIG function, as it helps to ensure that an IG’s work is viewed as impartial by agency personnel, Congress, and the public. The DHS OIG’s policies and procedures indicated that OIG staff were expected to comply with both generally accepted government auditing standards (GAGAS) and Council of the Inspectors General on Integrity and Efficiency (CIGIE) independence standards. Each of the OIG’s oversight functions—audits, inspections, and investigations—had its own manual of policies and procedures that was consistent with applicable independence standards. However, the procedures for ensuring compliance with these standards varied among the different functions. Three offices within the DHS OIG conduct audits—the Offices of Audits, Information Technology Audits, and Emergency Management Oversight. Officials from each of these offices said that their operations are guided by DHS’s OIG Audit Manual, which refers to GAGAS and CIGIE independence standards and provides guidance consistent with these standards. In addition, each of their audit reports includes a statement about the audit’s compliance with GAGAS. The OIG Audit Manual describes the OIG’s quality assurance and control system that, according to the manual, provides the controls necessary to ensure that GAGAS audits are conducted in accordance with applicable auditing standards and internal policies and guidance. For example, the manual requires supervisory and management reviews of audit documentation and resulting reports prior to issuance. The manual states that the Assistant Inspector General (AIG) for each audit office is responsible for ensuring implementation of these controls. Other parts of the OIG’s quality control system described in the manual include annual internal quality control reviews of audit offices and reports, as well as external peer reviews of audit operations by an independent federal OIG at least once every 3 years. The OIG Audit Manual also describes how OIG audit staff should document their independence. For auditors at the director level and below, the manual requires staff to document their independence biweekly in the OIG’s Time and Tracking System. For senior executives, including the IG, Deputy IG, Chief of Staff, AIGs, and others involved in reviewing audit reports, the manual requires documentation of their independence once a year, when they complete ethics training and financial disclosure forms. The manual further states that each office will maintain these annual certifications of independence but does not require the certifications to be centrally maintained and monitored for compliance. Procedures regarding annual certifications varied among the three audit offices. For example, the Office of Audits said that it maintained annual independence certificates for its division directors as well as its senior executives, while the Office of Emergency Management Oversight maintained them only for its senior executives (the AIG and Deputy AIG). The Office of Information Technology Audits said that instead of signing annual certifications, its senior executives sign independence certifications for every audit they work on, consistent with procedures for its audit staff.
While the audit offices told us they maintained independence certificates centrally within each office, the OIG executive office, which consisted of four senior executives at the time of our audit, indicated that it did not do so. In response to our February 2014 request for copies of certificates for the senior executives in the front office, an OIG official gathered certificates from the individuals. Two of the certificates were signed after we requested them, and according to the official, a certificate could not be found for the Acting IG who had recently left the OIG. Similarly, both the internal quality reviews and the peer reviews conducted from 2009 through 2012 identified instances in which employees, including some senior executives, had not documented their independence in accordance with the OIG Audit Manual, although these instances were not deemed significant enough to prevent the audit offices from passing these reviews. If senior officials do not sign annual certificates of independence as the OIG Audit Manual requires, or if the certificates are not maintained centrally so that compliance can be monitored, senior officials might not be aware of potential threats to independence or be as familiar with GAGAS or CIGIE independence requirements as they should be. For example, as a result of an impairment to the former Acting IG’s independence that was not identified in a timely manner, several audits and inspections were affected in fiscal year 2012. Specifically, a total of six reports from the Offices of Audits, Information Technology Audits, and Inspections were reissued to include a statement about an impairment of independence in appearance resulting from a family member of the former Acting IG being employed by an entity within DHS associated with these audits and inspections. In addition, because of the same impairment, one other audit was terminated without a report being issued. According to audit staff, they inadvertently became aware of the former Acting IG’s impairment in July 2012, when an audit official heard about the family member’s employment from a source outside of the OIG. In addition, a number of other allegations of abuse of power and misconduct were made against the former Acting IG, including failure to uphold the independence and integrity of the DHS OIG. These issues, among others, were to be the subject of a hearing by the Subcommittee on Financial and Contracting Oversight, Senate Committee on Homeland Security and Governmental Affairs, scheduled for December 2013; the hearing was canceled after the former Acting IG resigned on December 16, 2013, and transferred to another office within DHS. In April 2014, the subcommittee issued a report on its investigation into the allegations of misconduct by the former Acting IG. This investigative report detailed the subcommittee’s findings, including some actions that the subcommittee said jeopardized OIG independence. For example, the subcommittee found that the former Acting IG had directed the alteration or delay of some reports at the request of senior DHS officials and that he did not obtain independent legal advice. Subsequent to the issuance of the subcommittee report, the new DHS Secretary put the former Acting IG on administrative leave pending other investigations. While signing a certificate of independence might not have prevented these independence issues, it could nevertheless serve as a reminder of individual responsibilities and potential threats to independence.
Further, because certificates of independence are not maintained centrally, management’s ability to monitor compliance with independence requirements and obtain reasonable assurance that controls are in place and operating as intended is hindered. However, as stated in Standards for Internal Control in the Federal Government, no matter how well designed and operated, internal control cannot provide absolute assurance that all agency objectives will be met. Factors outside the control or influence of management, such as human error or acts of collusion to circumvent controls, can render policies and procedures ineffective. According to OIG officials, the DHS OIG’s Office of Inspections follows guidance in CIGIE’s Quality Standards for Inspection and Evaluation. In addition, DHS’s OIG Inspections Manual states that inspectors are expected to be knowledgeable of, and to abide by, the performance standards, including independence, described in CIGIE’s quality standards. The OIG Inspections Manual includes guidance on independence that is consistent with CIGIE standards, although not as detailed. For example, the manual states that while conducting an inspection, all team members have a responsibility to maintain independence so that opinions, conclusions, judgments, and recommendations will be impartial and viewed as such by knowledgeable third parties. Neither CIGIE’s quality standards nor the OIG Inspections Manual requires external peer reviews of OIG inspection activities. However, CIGIE’s quality standards state that an OIG should have quality control mechanisms that provide an independent assessment of inspection processes and work. To help assess the inspection processes and work, in 2006 the AIG for Inspections requested an independent external review of three inspections. Each inspection was reviewed by a different OIG, and the reviews focused on the quality of work performed for each inspection. None of the reports from these external reviews identified any independence concerns. More recently, in February 2014, the OIG completed its first internal quality review of the Office of Inspections. The resulting report indicated that the Office of Inspections materially complied with CIGIE’s quality standards, and the report did not identify any issues with respect to independence. According to DHS OIG officials, OIG investigations are conducted in accordance with CIGIE’s Quality Standards for Investigations. In addition, the Office of Investigations has its own manual—the Special Agent Handbook—which states that investigations will be conducted in accordance with CIGIE’s standards as well as Attorney General Guidelines for OIGs. CIGIE standards for investigations include requirements on independence that are similar to those for audits. The Special Agent Handbook includes guidance on ethical conduct and conflicts of interest, specifically stating that if a special agent suspects that a conflict of interest exists or that there might be an appearance of a conflict of interest, the agent should notify the Special Agent in Charge as soon as possible. The DHS OIG Office of Investigations passed peer reviews by the Social Security Administration OIG and the Department of Defense OIG in 2009 and 2013, respectively. Neither of these peer reviews identified any issues regarding independence or conflicts of interest. In addition, the Office of Investigations conducted its own internal inspections in the past.
None of the internal inspection reports from fiscal years 2011 and 2012 identified any issues with respect to independence. However, because of serious concerns about the independence of the internal inspection function carried out by the Office of Investigations, as previously discussed, the OIG decided to create IQO in 2013 and move this function to that office. OIG officials told us that the Special Agent Handbook is being updated, in part as a result of the transfer of some functions, such as complaint intake, to IQO. The DHS OIG’s organizational structure has the positions required by the IG Act to carry out its various responsibilities. However, the lack of an appointed IG for 3 years may have contributed to a number of other senior positions being filled by “acting” individuals for extended periods of time. The OIG has made some meaningful changes to its structure to try to address concerns about the integrity and independence of the OIG. The establishment of IQO is intended to enhance the OIG’s ability to carry out functions such as complaint intake and whistleblower protection, including helping to ensure that these functions maintain their independence. However, the DHS OIG process for recording and referring complaints received directly from DHS employees does not provide reasonable assurance that employees’ identities are protected, as required by the IG Act. In addition, the important role that the OIG plays in receiving and investigating allegations of criminal activity within the department can be enhanced through better coordination and information sharing with the FBI. OIG officials intend to continue working toward improved coordination with the FBI and provided input to a DHS report with their plans, as directed by the Senate. In light of the congressional direction to DHS, we are not making a recommendation on this matter at this time. Furthermore, although the IG has its own counsel as required by the IG Act, the OIG did not have a policy to address the legal requirements related to obtaining advice from legal counsel. While the DHS OIG has appropriate policies and procedures in place to comply with independence standards, even the best control procedures cannot always guarantee that standards are being met. Although the former Acting IG was required to sign an annual certificate of independence reminding him of independence requirements, the OIG did not have a policy requiring that these certificates be collected and maintained centrally, which would improve management’s ability to monitor compliance with independence requirements. We recommend that the OIG take the following three actions:

1. To improve its intake and complaint processing function, design and implement additional controls, which may include additional automated controls and supervisory review, to provide reasonable assurance that employees’ identities are not disclosed without their consent when forwarding complaints to DHS components.

2. To help avoid any impairment to independence when seeking legal advice, develop a policy for obtaining legal advice from counsel reporting either directly to the IG or to another IG, and work with the IG’s counsel to establish guidelines for when it would be appropriate for the OIG to consult with the department’s counsel.
3. To help ensure that senior executives are aware of independence requirements so that they are able to identify and mitigate any threats to their independence, revise the guidance in the OIG Audit Manual to require signed annual certificates of independence to be collected and maintained centrally in order to monitor compliance.

We provided a draft of this report to the DHS Inspector General for review and comment. In written comments, which are reprinted in appendix IV, the OIG generally agreed with our conclusions and concurred with all of our recommendations. In addition, the OIG provided technical comments that we incorporated into the report as appropriate. We also provided the FBI with an excerpt of the draft report related to its coordination with the OIG on border corruption investigations. The FBI did not have any comments. The OIG described actions planned and under way to address each of our recommendations. Specifically, the OIG stated that it is revising its online allegation form to (1) ensure complainants do not inadvertently provide personal data when their intent was to remain anonymous and (2) simplify and clarify for complainants the distinctions between filing an anonymous complaint and filing a confidential complaint. The OIG anticipates completing the changes to the online allegation form within 90 days of its letter. According to the OIG, it has also implemented supervisory controls to provide oversight and quality control of intake analysts’ work, including the movement or referral of complaints to DHS components. The OIG also stated that it plans to develop policy and guidelines within 90 days of its letter to describe the circumstances under which the IG can obtain legal advice from DHS’s Office of Legal Counsel. In addition, the OIG plans to revise its audit manual by October 31, 2014, to require the Office of Integrity and Quality Oversight to collect and maintain senior executives’ signed annual independence statements. If implemented effectively, the OIG’s planned actions should address the intent of our recommendations. We are sending copies of this report to appropriate congressional committees, the Secretary of Homeland Security, and the DHS Inspector General. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-9869 or khana@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff members who made key contributions to this report are listed in appendix V. The joint explanatory statement accompanying the Department of Homeland Security (DHS) Appropriations Act, 2013, directed GAO to review the Office of Inspector General’s (OIG) organizational structure and whether its audit, investigation, and inspection functions are organizationally structured to ensure that independence standards are met. To meet the committee’s time frames established in the explanatory statement, we briefed committee staffs on our preliminary observations in November 2013.
This report provides the results of our review to determine (1) the coverage that the OIG’s audits and inspections provided of DHS’s key component agencies, management challenges, and high-risk areas; (2) the extent to which the OIG’s organizational structure and roles and responsibilities were consistent with the Inspector General Act of 1978, as amended (IG Act); and (3) the extent to which the design of OIG’s policies and procedures for planning, reviewing, and reporting on audit, inspection, and investigation results was consistent with applicable independence standards. To determine the coverage that DHS OIG’s audits and inspections provided of DHS’s key component agencies, management challenges, and high-risk areas, we reviewed the DHS agency financial reports for fiscal years 2012 and 2013 to identify DHS’s key operational components. To identify the department’s management challenges, we reviewed the OIG’s annual performance plans for fiscal years 2012 and 2013. To identify the high-risk areas for DHS, we reviewed GAO’s High-Risk Series for 2013. To identify DHS OIG reports issued during fiscal years 2012 and 2013 and the subjects covered in those reports, we reviewed DHS OIG’s semiannual reports. In conducting our analysis, we did not review the coverage provided by investigations because they are generally conducted based on specific allegations of misconduct received and are not planned in advance as audits and inspections are. We compared the agencies and subjects covered in these audit and inspection reports with DHS’s key component agencies, management challenges, and high-risk areas to determine DHS OIG’s oversight coverage of DHS. We also interviewed knowledgeable DHS OIG officials to obtain their comments about the results of our analysis. To determine the extent to which the organizational structure, roles, and responsibilities of the DHS OIG were consistent with the IG Act, we reviewed DHS OIG’s organization chart, office descriptions, policies related to its roles and responsibilities, memorandums of understanding, and other relevant documents. We also interviewed former and current DHS OIG officials, as well as officials from DHS components, including officials from U.S. Customs and Border Protection, U.S. Immigration and Customs Enforcement, and the Federal Emergency Management Agency, and officials from the Federal Bureau of Investigation (FBI). We analyzed the information from interviews and documentation to determine whether there were any inconsistencies between the information and the applicable IG Act requirements and the reasons for any such inconsistencies. Our review focused primarily on OIG documentation and discussions with OIG officials to assess the design of the OIG’s stated roles and responsibilities. For the most part, we did not review the implementation of these roles and responsibilities except for certain issues that came to our attention—specifically, the lack of a memorandum of understanding with the FBI, the procedures for protecting employee confidentiality, and the nature of OIG consultations with the department’s legal counsel. To determine the extent to which the design of DHS OIG’s policies and procedures for planning, reviewing, and reporting on audit, inspection, and investigation results was consistent with applicable independence standards, we interviewed DHS OIG officials and reviewed applicable policy, procedure, and planning documents, such as the OIG Audit Manual, OIG Inspections Manual, and Special Agent Handbook. 
We also reviewed peer review reports of audit and investigative operations as well as reports of internal inspections of investigative activities. We assessed the design of the DHS OIG’s policies and procedures for planning, reviewing, and reporting on audit, investigation, and inspection results and determined whether the design was consistent with applicable independence standards found in Government Auditing Standards and the standards published by the Council of the Inspectors General on Integrity and Efficiency, which include Quality Standards for Federal Offices of Inspector General, Quality Standards for Inspection and Evaluation, and Quality Standards for Investigations. We also requested information about which employees signed annual certifications of independence and requested copies of these certifications for selected staff. We conducted this performance audit from June 2013 to September 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Federal inspectors general (IG) have various quality standards to which their offices must adhere in performing their work. The Quality Standards for Federal Offices of Inspector General (referred to as the Silver Book), adopted by the Council of the Inspectors General on Integrity and Efficiency (CIGIE), provides the overall quality framework for managing, operating, and conducting the work of offices of inspector general (OIG) and covers critical topics such as independence, ethics, and confidentiality. These quality standards also address planning and coordinating work, maintaining quality assurance, and ensuring internal control. In addition to the Silver Book, federal OIGs have standards for specific types of work. For audits, the Inspector General Act of 1978, as amended, requires IGs to carry out work in accordance with generally accepted government auditing standards published in Government Auditing Standards. For investigations and inspections, CIGIE has developed additional standards: Quality Standards for Investigations and Quality Standards for Inspection and Evaluation. Government Auditing Standards and CIGIE standards contain provisions covering independence. Government Auditing Standards establishes a conceptual framework that can be used to identify, evaluate, and apply safeguards (such as removing an individual from an audit team or consulting an independent third party) to eliminate threats to independence or reduce them to an acceptable level. Under Government Auditing Standards, IGs should evaluate the following broad categories of threats to independence:

1. Self-interest: The threat that a financial or other interest will inappropriately influence an auditor’s judgment or behavior.

2. Self-review: The threat that an OIG employee or an OIG that has provided nonaudit services will not appropriately evaluate the results of previous judgments made or services performed as part of the nonaudit services when forming a judgment significant to an audit.

3. Bias: The threat that an OIG employee will, as a result of political, ideological, social, or other convictions, take a position that is not objective.
4. Familiarity: The threat that aspects of a relationship with management or personnel of an audited entity, such as a close or long relationship or that of an immediate or close family member, will lead an OIG employee to take a position that is not objective.

5. Undue influence: The threat that external influences or pressures will affect an OIG employee’s ability to make independent and objective judgments.

6. Management participation: The threat that results from an auditor’s taking on the role of management or otherwise performing management functions on behalf of the entity undergoing an audit.

7. Structural: The threat that an OIG’s placement within a government entity, in combination with the structure of the government entity being audited, will affect the OIG’s ability to perform work and report results objectively.

CIGIE’s Quality Standards for Federal Offices of Inspector General closely mirrors the independence language in Government Auditing Standards. For example, it states that IGs and their staffs must be free both in fact and in appearance from personal, external, and organizational impairments to their independence. Further, IGs and their staffs have a responsibility to maintain independence so that opinions, conclusions, judgments, and recommendations will be impartial and viewed as impartial by knowledgeable third parties. CIGIE also addresses the same seven categories of threats to independence as Government Auditing Standards. CIGIE’s standards specifically for investigations and inspections also include language about the importance of independence; however, they contain far less detail about independence than the overall standards. The Inspector General (IG) Act of 1978, as amended, requires the independent IGs at major departments and agencies, including the Department of Homeland Security (DHS), to carry out specific roles and responsibilities. Table 6 summarizes selected requirements and the DHS Office of Inspector General’s (OIG) procedures for carrying out these requirements. In addition to the contact named above, Michael LaForge (Assistant Director), Fred Evans, Latasha Freeman, Maxine Hattery, Colleen Heywood, Jackson Hufnagle, Kristi Karls, and Laura Pacheco made significant contributions to this work. Kathryn Bernet, Jacquelyn Hamilton, and Helina Wong also contributed to this report.
The DHS OIG plays a critical role in strengthening accountability throughout DHS. The OIG received about $141 million in fiscal year 2013 appropriations to carry out this oversight. The joint explanatory statement to the Department of Homeland Security Appropriations Act, 2013, directed GAO to review the OIG and its organizational structure for meeting independence standards. This report examines (1) the coverage the OIG's audits and inspections provided of DHS's component agencies, management challenges, and high-risk areas; (2) the extent to which the OIG's organizational structure, roles, and responsibilities were consistent with the IG Act; and (3) the extent to which the design of the OIG's policies and procedures was consistent with applicable independence standards. To address these objectives, GAO obtained relevant documentation, such as selected reports and OIG policies and procedures, and compared this information to the IG Act and independence standards. GAO also interviewed officials from the OIG, DHS components, and the FBI. During fiscal years 2012 and 2013, the Department of Homeland Security's (DHS) Office of Inspector General (OIG) issued 361 audit and inspection reports that collectively cover key components, management challenges identified by the OIG, and relevant high-risk areas identified by GAO. Of the 361 reports, 200 pertained solely to the Federal Emergency Management Agency (FEMA)—the DHS component with the largest budget. Of those FEMA reports, 118 involved audits of disaster assistance grants. The OIG's organizational structure, roles, and responsibilities are generally consistent with the Inspector General (IG) Act of 1978, as amended (IG Act). In 2013, the OIG made changes to its structure to enhance independence and oversight, including establishing an Office of Integrity and Quality Oversight. However, areas for improvement exist for the OIG to better meet its responsibilities. The OIG has not reached agreement with the Federal Bureau of Investigation (FBI) on coordinating and sharing border corruption information. The IG Act requires OIGs to recommend policies for, and to conduct, supervise, or coordinate, relationships with other federal agencies regarding cases of fraud or abuse. The Senate Appropriations Committee directed DHS to report jointly with the OIG and other DHS components on plans for working with the FBI. The OIG lacks adequate controls to protect the identities of employees filing complaints because its process for recording complaints involves significant manual procedures, without supervisory review, that can be subject to human error. The IG Act requires that OIGs not disclose the identity of an employee filing a complaint without the employee's consent unless such disclosure is unavoidable during the course of an investigation. The OIG is aware of these issues and is developing standard operating procedures. The OIG does not have a policy for obtaining legal advice from its own counsel or guidelines specifying when it is appropriate to consult with the department's counsel. The former Acting IG requested legal help from a counsel at the department for 4 months, and it was not clear if this request was for appropriate matters. The IG Act requires the IG to obtain legal advice from a counsel reporting directly to the IG or to another IG. The OIG Deputy Counsel has asked a working group to draft guidelines on consultations with the department's counsel. The OIG's policies and procedures are consistent with independence standards.
However, OIG senior executives did not always comply with the policy to annually complete certificates of independence. Because the OIG does not centrally maintain the certifications, management's ability to monitor compliance is hindered. For example, no certificate of independence could be found for the former Acting IG. As a result of an impairment to the former Acting IG's independence that was not identified in a timely manner, the OIG had to reissue six reports for fiscal year 2012 to add an explanatory statement about the impairment. External peer reviews of the OIG's audit function, completed in 2009 and 2012, also found that OIG staff, including senior executives, had not documented their independence as required. GAO is making three recommendations for improving controls over processing complaints, obtaining legal advice, and monitoring compliance with independence standards. The IG concurred with GAO's recommendations and described actions being taken to address them.
Businesses can raise capital in the regulated securities markets by offering their securities to the general public. For a small business, this could take the form of a registered public offering, in which the securities offered and sold to the general public are registered with SEC. Unless subject to a specific exemption, the Securities Act requires a business selling its securities to file a registration statement with SEC that includes a prospectus that discloses, among other things, the business’s operations, financial condition, security offering, risk factors, and management. Businesses that qualify as smaller reporting companies under SEC rules can file using disclosure requirements scaled for small businesses. The Securities Act requires that information provided to investors in connection with the offer or sale of the business’s securities include material information necessary to make an investment decision. SEC’s Division of Corporation Finance (Corporation Finance) reviews registration statements for compliance with disclosure and accounting requirements. Corporation Finance does not evaluate the accuracy of disclosure or the merits of any transaction, nor does it determine whether an investment is appropriate for any investor. According to SEC, this review process is not a guarantee that the disclosure is complete and accurate—responsibility for complete and accurate disclosure lies with the business and others involved in the preparation of the business’s filings. Through the course of its review, Corporation Finance may issue comments to a company to elicit compliance with applicable disclosure requirements. In response to those comments, a business may revise its financial statements or amend its disclosure to provide additional information. According to SEC, this comment process is designed to provide investors with better disclosure necessary to make informed investment decisions, thus enhancing investor protection, facilitating capital formation, and enhancing the efficiency of the capital markets. When a business has resolved all comments from Corporation Finance on a Securities Act registration statement, the business may request that SEC declare the registration statement effective so that it can proceed with the transaction. A business cannot sell its securities until SEC declares the registration statement effective. The Small Business Investment Incentive Act of 1980 requires SEC to conduct an annual forum on small business capital formation. In 2011, SEC held its 30th forum. According to the resulting report on the forum, a major purpose of the forum is to provide a platform to highlight perceived unnecessary impediments to small business capital formation and address whether they can be eliminated or reduced. The forum is also intended to develop recommendations for government and private action to improve the environment for small business capital formation, consistent with other public policy goals, including investor protection. The report made a number of recommendations, including a few related to Regulation A, such as raising the offering ceiling to $50 million and preempting Regulation A offerings from registration requirements under state securities laws (known as blue sky laws).
Regulation A represents an exercise by SEC of its authority under section 3(b) of the Securities Act to exempt offerings of securities from registration if it finds that registration “is not necessary in the public interest and for the protection of investors by reason of the small amount involved or the limited character of the public offering….” SEC has previously stated that the primary purpose in adopting Regulation A was to provide a simple and relatively inexpensive procedure for small business use in raising limited amounts of needed capital. A business that relies on Regulation A must (1) file an offering statement, which includes an offering circular and financial statements, for SEC staff review and (2) provide the offering circular to investors. The offering statement also includes a notification and exhibits. The offering circular is expected to include, among other things, information on the company; officers, directors, and key personnel; risk factors; use of proceeds; and the plan for distributing the securities. SEC staff review the initial offering statement (for example, to determine if it complies with disclosure requirements) and determine whether the offering is qualified (i.e., cleared by SEC). We discuss the review process in more detail later in this report. Like securities sold in registered offerings, Regulation A securities can be offered publicly and are freely tradable in the secondary market. In addition, Regulation A securities can be sold to both accredited and nonaccredited investors. Accredited investors include, among others, individuals whose net worth is more than $1 million (not including the value of their primary residence) or whose individual income exceeded $200,000 in each of the 2 most recent years, as well as certain institutional investors, such as insurance companies, banks, and corporations with assets exceeding $5 million. Conversely, nonaccredited investors include any investor that does not meet the definition of an accredited investor. Although Regulation A offerings are generally subject to state blue sky laws, state exemptions for certain offerings might apply to a Regulation A issuer. The Uniform Securities Acts of 1956 and 2002, which form the basis for many blue sky laws, provide a series of exemptions from state-level registration for certain types of securities or transactions. For example, one exemption applies to sales to institutional investors, federally covered investment advisors, and other purchasers exempted by a state rule. Furthermore, the Jumpstart Our Business Startups (JOBS) Act preempts state registration requirements for offerings under the new version of Regulation A if the securities are sold on a national securities exchange or to “qualified purchasers” as defined by SEC. Regulation D is designed to (1) eliminate any unnecessary restrictions that SEC rules place on small business issuers and (2) achieve uniformity between state and federal exemptions to facilitate capital formation consistent with protecting investors. Regulation D contains three separate but interrelated exemptive rules—Rules 504, 505, and 506—that allow some businesses to offer and sell their securities without having to register the securities with SEC. The exemptions differ in the size of the offerings to which they apply and in the number and type of investors to which offerings may be made, as the following illustrates:

Rule 504 has a maximum offering amount of $1 million in any 12-month period and generally does not limit the number or type of investors.
Rule 505 has a maximum offering amount of $5 million in any 12-month period, and sales are limited to 35 nonaccredited investors and an unlimited number of accredited investors.

Rule 506 has no dollar limitation, and offerings can be sold to up to 35 nonaccredited, sophisticated investors and an unlimited number of accredited investors.

While businesses do not have to register Regulation D offerings with SEC, they must notify SEC of initial sales in the offering. SEC does not comment on or approve these notifications. Businesses that make offerings under Rules 504 or 505 must register them at the state level if required by the state in which they are made, while offerings made under Rule 506 are preempted from state registration by the National Securities Markets Improvement Act of 1996. In addition to federal securities laws, state securities laws are designed to protect investors against fraudulent sales practices and activities. While these laws can vary from state to state, they require securities issuers (including businesses making small offerings) to register their offerings with the state before the offerings can be sold in that state, unless state registration for the offering has been preempted by federal law or a state registration exemption applies. According to state securities administrators with whom we met, blue sky laws are beneficial because they provide an additional layer of protection for potential investors. Moreover, in states that have the statutory authority to assess the merit of an offering, the state can assess the extent to which the offering is fair to potential investors and require the business to address the state’s concerns before the offering is registered. The numbers of Regulation A offerings filed and qualified have declined significantly since peaking in fiscal years 1997 and 1998, respectively (see fig. 1). The number of initial Regulation A offerings filed increased from 15 to 116 from 1992 through 1997. Similarly, the number of Regulation A offerings qualified increased from 14 to 56 during this same time. These increases followed SEC’s adoption of rules that raised the ceiling for Regulation A offerings from $1.5 million to $5 million and allowed Regulation A offerors to “test the waters” by soliciting investor interest in a security before incurring preparation costs for the offering statement. However, the number of initial Regulation A offerings filed decreased significantly after 1997—from 116 that year to 19 in 2011. The number of qualified offerings also dropped dramatically after 1998, decreasing from 57 in 1998 to 1 in 2011. SEC has not evaluated the causes of changes in the use of Regulation A. Securities attorneys with whom we met stated that the decrease in filings after 1997 could be attributed to a number of different factors, including the increased attractiveness of Regulation D. The National Securities Markets Improvement Act of 1996 preempted state registration requirements for certain other categories of securities offerings (including offerings under Rule 506 of Regulation D)—potentially making these other options more attractive to businesses. Initial qualified Regulation A offerings have varied in size and purpose and represented a wide range of business lines. Specifically, from 2002 through 2011, the maximum offering amounts for the 82 qualified Regulation A offerings ranged from $100,000 to $5 million. Over one-third of these offerings were for $5 million.
According to SEC data, the businesses intended to use the proceeds for purposes such as capitalization, debt repayment, research and development, and marketing and advertising. During this period, different types of businesses filed offerings that qualified for exemption under Regulation A—for example, a software database service company, an industrial design company, a senior assisted living facility, and a financial services company. These businesses were either corporations or limited liability companies and were located throughout the United States. In addition, about 24 percent of the qualified Regulation A offerings were associated with start-up businesses. Businesses have used Regulation D exemptions and registered initial public offerings to a greater extent than Regulation A in recent years. We summarize the trends for these types of offerings and provide a comparison with qualified Regulation A filings in table 1. Regulation D: According to SEC data, there were over 15,500 initial Regulation D filings for offerings of up to $5 million in fiscal years 2010 and 2011. In comparison, there were 8 qualified initial Regulation A offerings during this period. According to a recent report prepared for SEC, the median Regulation D offering was $1 million from January 2009 through March 2011, and the overwhelming majority of Regulation D issuers have been issuing securities under Rule 506. Registered Public Offerings: Businesses may decide to sell their securities through a registered public offering rather than seeking a Regulation A or another type of exemption—meaning that they must complete the registration process under the Securities Act. Data show that businesses more frequently opted to conduct a registered public offering than to seek a Regulation A exemption. From fiscal years 2008 through 2011, the number of initial registered public offerings ranged from 195 to 536 each year, while the number of qualified initial Regulation A offering statements ranged from 1 to 8. SEC staff said that some businesses may switch to a registered public offering instead of completing the Regulation A filing process. SEC’s process for reviewing filings for exemption through Regulation A includes multiple steps. First, Corporation Finance staff review the offering statement, which includes financial statements that have been prepared according to generally accepted accounting principles. More specifically, staff review filings to determine whether disclosures appear to be consistent with SEC rules and applicable accounting standards. Staff can then comment on the offering statement. That is, they may note deficiencies with the offering documents or ask for clarifications. According to SEC staff, deficiencies could include inadequate disclosure or incomplete financial statements. The goal of SEC staff is to provide comments on Regulation A filings within 27 calendar days of the filing date. Businesses are then given the opportunity to provide written responses and, if appropriate, amend their filings based on SEC’s comments. Depending on the nature of the issue, SEC’s concern, and the response from the business, agency staff may issue additional comments following their review of the response. This comment and response process continues until all SEC comments are resolved, at which time SEC qualifies the filing.
The time period for SEC to complete its review process can be lengthy, depending on the quality and completeness of the offering statement, the extent of SEC’s comments on the offering statement, and the business’s response. According to SEC data, from 2002 through 2011 it took an average of 228 days for 82 offering statements to complete the review process, starting from the date the Regulation A exemption was filed through the date SEC qualified the filing. SEC staff told us that the length of the review process depends largely on the quality of the initial filing and how quickly and thoroughly the business responds to their comments. Because of the amount of time it can take to complete SEC’s review process, one issuer we interviewed said that it concurrently filed its Regulation A offering with SEC and the appropriate state(s). A business can opt not to continue seeking exemption through Regulation A at any point during SEC’s review process. SEC may declare an offering statement to be “abandoned” when the business fails to amend the offering statement for a lengthy time period and fails to respond to an abandonment notice. A filing may be “withdrawn” if the business informs SEC that it no longer wants to proceed and requests that the offering statement be withdrawn, and SEC consents to the withdrawal. Between 1992 and May 2012, 214 of the 1,006 Regulation A filings made with SEC were abandoned or withdrawn. As discussed earlier, SEC staff stated that they have received anecdotal information that some businesses abandon or withdraw from the Regulation A filing process to raise capital through different means, such as a registered public offering. Although states employ a limited number of methods for registering securities offerings, specific requirements and processes vary. States generally use one of two methods for registering Regulation A securities—registration by qualification or registration by coordination. Registration by qualification is similar to a securities registration with SEC under the Securities Act. Specifically, the issuer submits required documents to the state securities agency, and the offering is subject to approval by that agency under the state’s standards. Registration by coordination is available to issuers that have registered their offerings with SEC. Under this method, issuers file copies of their SEC registration statement and any amendments with the state agency for review. A registration by coordination usually becomes effective at the state level at the time it becomes effective at the federal level. Although the content of the filing and the procedure by which it becomes effective are streamlined in this process, the filing is still subject to state administrator review. While all states conduct disclosure reviews of Regulation A securities offerings, most states also conduct merit reviews. Disclosure reviews follow the federal approach, requiring only full disclosure of all material information in offering statements. A merit review is an analysis of the offering using substantive standards (for example, the disparity between the price paid by promoters for their shares and the price paid by public investors). If an offering is considered unfair in certain respects, a state securities administrator will issue comments on the substance of the offering, and, as in SEC’s review process, the business has an opportunity to respond to the state’s comments.
According to the North American Securities Administrators Association (NASAA), if the business does not adequately address the state’s concerns, the state securities administrator may refuse to declare the registration statement effective in that state. Merit reviews have varying degrees of stringency, with some states applying stricter standards than others. For example, according to one of the state securities administrators with whom we met, the state’s blue sky laws require businesses that seek to offer securities to have a consistent record of earnings for the preceding 3 fiscal years. Other states may not have the same requirements for records of earnings. According to a state securities administrator official from a merit state, that state may require proceeds from investors to be placed in escrow until a certain level of proceeds is reached. For example, where an offering provides that a certain level of securities must be sold before proceeds are released to an issuer, the state requires the issuer to place all proceeds from investors in that state in an escrow account with a depository in that state until the level is reached. The funds cannot be released without authorization from the state agency. Although state registration processes can improve investor confidence, they can be costly and time-consuming for businesses seeking to raise capital, according to issuers and securities attorneys with whom we met. Recognizing these potential costs, NASAA has developed and encouraged states’ use of methods to make registration of securities, including Regulation A offerings, more streamlined for multistate offerings. For example, 44 states allow businesses to use a standard form, called the Small Company Offering Registration (SCOR) form, to register their security offerings. The SCOR form was adopted by NASAA in 1996 and is designed to simplify registration and reduce its costs for businesses. SCOR offers a simplified question-and-answer registration format and becomes the main disclosure document for securities offerings at the state level. Businesses that are exempt from federal registration under Regulation A can use the SCOR form in those states that accept it. As another means of streamlining the state registration process, some states participate in coordinated review programs—also known as regional reviews. A regional review expedites multistate registration, thereby potentially saving issuers time and money. Regional reviews are available in the New England, Mid-Atlantic, Midwest, Southwestern, Southeastern, and Western regions. Each state participating in the program agrees to apply uniform standards regarding such matters as the time frame for issuing comments and the type of comments to be issued in reviewing registration applications. According to NASAA, approximately 37 states participate in regional reviews. The efficacy of these efforts to streamline the state registration process is unknown. Several of the state securities administrators whom we interviewed said that their states had not participated in regional reviews or used SCOR forms for Regulation A filings because there had been so few Regulation A filings in their states. Similarly, a researcher and securities attorneys with whom we met noted that some of these methods, like SCOR, have not been widely used because of the low number of Regulation A filings in recent years. According to officials from NASAA, changes to the states’ registration processes and requirements are likely needed to coincide with the new exemption for larger offerings under the JOBS Act.
NASAA staff stated that they recognize that issuers may want to conduct nationwide offerings under the larger federal exemption, which increases the need for uniform state-level registration requirements for such larger offerings. In particular, the increased ceiling amount could encourage smaller community banks as well as those businesses that do not want to limit themselves to accredited investors or investors in a single state to pursue a Regulation A filing. Officials from organizations that work to develop capital-intensive businesses agreed that in order for small businesses to use the Regulation A exemption, the process to register in multiple states needed to be more streamlined and entail minimal cost and greater efficiency. NASAA plans to work with the states to promote a more uniform state-level registration process for larger offerings. In addition, NASAA plans to coordinate with SEC on new disclosure forms for larger offerings—with the goal of developing a disclosure form that can be used at the federal and state level. According to NASAA officials, the time frame for making these and other changes is unknown, as the states must wait for SEC to issue certain rules under the JOBS Act. According to the stakeholders with whom we met, multiple factors may have influenced small businesses’ decision to use Regulation A. These factors included the type of investors businesses sought to attract, the process of filing the offering with SEC, state securities laws, and the cost-effectiveness of Regulation A relative to other SEC exemptions. Views vary on whether use of Regulation A will increase, with some stakeholders stating that interest will increase as a result of the $50 million ceiling, and others stating that the requirement for issuers to register the securities at the state level will continue to deter small businesses from using the exemption. Multiple factors appear to have influenced whether small businesses used Regulation A to raise equity capital, according to recent issuers and other stakeholders with whom we met. Regulation A has been attractive to small businesses because, among other things, they can sell the securities to nonaccredited investors. However, other factors, including SEC’s process for qualifying Regulation A filings, the requirement for Regulation A issuers to comply with blue sky laws, and the benefits associated with Regulation D have played a role in limiting the use of Regulation A to date. Small businesses that wanted nonaccredited investors to purchase their securities have opted to use Regulation A, according to recent issuers of Regulation A securities as well as other stakeholders with whom we met. One issuer stated that working with investors that supported its social mission was important, and that these investors were not necessarily accredited. Another issuer stated that the business wanted to sell its securities to specific investors with whom it had existing relationships—which also were not necessarily accredited. In both cases, the issuers explained that had they used a different SEC exemption to raise capital, such as Regulation D, they would not have been able to sell their securities to their desired investors. Representatives of one issuer also noted that they wanted to offer their securities to the public, and Regulation A enabled them to do so. This company offered its securities on the internet.
Securities attorneys who have experience in assisting small businesses in raising equity capital similarly stated that Regulation A has been attractive to businesses that desired to make their securities available to members of the businesses’ local community. The process of filing a Regulation A offering with SEC, and working with SEC to qualify the filing, can be time-consuming and costly, according to several stakeholders with whom we met. For example, several stakeholders (including a recent Regulation A issuer, attorneys who worked with recent issuers, and a small business advocate) described the process as detailed and time-consuming; two of these stakeholders described the process as akin to filing a registered public offering. Other stakeholders noted that the process of filing a Regulation A offering is considered in the industry to be a “mini-registration.” Stakeholders also noted that because the process of receiving and addressing comments from SEC could entail multiple rounds involving attorneys and accountants, it could be costly to the small businesses involved. Two of the Regulation A issuers with whom we met stated that SEC required them to address comments related to their financial statements and that such comments required the issuers to work with their accountants to clarify accounting-related information, which was costly. According to SEC, its process of qualifying Regulation A offerings is designed to protect investors. SEC staff stated that in some cases the businesses that were seeking exemption through Regulation A did not fully address SEC’s comments and requests for clarification, which resulted in additional comment letters as well as informal communication. Identifying and addressing the securities registration requirements of individual states is both costly and time-consuming for small businesses, according to research, an advocate for small businesses, and securities attorneys with whom we met. For example, one academic who has researched and written extensively about blue sky laws believes that they impose significant costs on small businesses and impair capital formation. According to this researcher, the costs to issuers of addressing blue sky laws have been a significant factor in the historic underuse of Regulation A by small businesses. An advocate for small businesses as well as securities attorneys with whom we met agreed with this assessment. An organization that advocates for small businesses noted that small businesses have limited resources; thus, the legal expenses associated with researching and complying with state securities laws can be a significant burden. Securities attorneys who have experience in assisting small businesses with obtaining the Regulation A exemption noted that their legal fees were relatively high because of the need to research individual states’ blue sky laws. For example, one attorney who works with start-up technology firms stated that his fees associated with Regulation A were high because of the need to research state laws, prepare offering documents for individual states, and address comments both from SEC and from some states. Some states’ securities registration requirements also deterred small businesses from registering in those states. For example, a representative of one of the Regulation A issuers with whom we met stated that the issuer was deterred from registering in a specific state because of the state’s requirement for issuers to have a consistent record of earnings for the preceding 3 fiscal years.
He stated that because the business was relatively new and had not yet become profitable—particularly during the recent financial crisis—it could not meet this requirement. According to the securities administrator for this state, the state’s securities laws are intended to help ensure fair, just, and equitable offerings for investors, but other means exist to meet the state’s requirements. As another example, one issuer opted to withdraw its application from a state that provided extensive comments on the business’s offering. According to an official from this state’s securities administrator, small businesses likely withdraw from the process of registering with the state to avoid having to address the state’s comments. Merit review states are viewed by some stakeholders as presenting greater challenges for small businesses that want to register Regulation A securities. As previously discussed, states assess the fairness of offerings in merit reviews and require businesses to address their comments before securities can be registered. For example, we met with securities attorneys who had experience obtaining Regulation A exemptions for small businesses. Some of the attorneys stated that they advised their clients to avoid registering in merit review states. The legal counsel for one recent Regulation A issuer noted that after researching merit review states and contacting the securities administrator for one of these states, it became evident that the review processes in such states would be time-consuming and burdensome to address. The counsel advised, and the issuer agreed, to avoid attempting to register in any merit review states. As noted earlier, according to NASAA officials, most states perform merit reviews. Issuers with whom we met stated that they registered in 3 to 11 states. Another SEC exemption—Rule 506 of Regulation D—historically has been preferable to Regulation A because of its time and cost benefits and lack of an offering ceiling, according to an organization that advocates on behalf of small businesses and securities attorneys with experience in working with small businesses to raise equity capital. For example, one small business advocate stated that a small business has little reason to use Regulation A, particularly if it can use Rule 506 of Regulation D, which preempts blue sky laws. That is, a business that uses Rule 506 of Regulation D can raise equity capital without having to register the security in individual states, saving the business both time and money. Securities attorneys with whom we met agreed that Rule 506 of Regulation D is a preferable method of raising capital for small businesses because it is more cost-effective. As an example, one attorney noted that technology firms have been more inclined to use Rule 506 of Regulation D over Regulation A because the legal costs were lower, and such offerings could be made more quickly. For the technology industry, there are risks associated with time; thus, these firms want to obtain capital quickly. SEC staff stated that for Regulation D, businesses are required to notify SEC of the offerings, and that SEC does not generally provide comments on the notifications. Securities attorneys, staff from the offices of some state securities administrators, and other stakeholders with whom we met noted that Regulation D in general is preferable to Regulation A because the process of filing the required information with SEC is quicker and less burdensome.
According to stakeholders whom we interviewed, Rule 506 of Regulation D also has been viewed as preferable to Regulation A because it did not have a maximum offering ceiling. Staff from some of the state securities administrators’ offices with whom we met stated that use of Regulation A had been low because the maximum offering amount was too small, and Regulation A was not as cost-effective as other financing mechanisms. Some securities attorneys with whom we met similarly described the Regulation A ceiling as too low, and stated that Rule 506 of Regulation D was very attractive in comparison. Securities attorneys also noted that the legal costs associated with Regulation A offerings were greater than those associated with Regulation D offerings. In addition, we previously reported that one of the reasons given for the limited use of Regulation A was that it was rare for an issuer to attract an underwriter for an offering under $5 million. Securities attorneys with whom we met agreed that offerings of $5 million or less were viewed unfavorably by underwriters because they were too small in size to be profitable. The number of small businesses that seek exemption through Regulation A may increase as a result of the JOBS Act’s requirement for SEC to increase the maximum offering amount to $50 million, according to staff from some state securities administrators’ offices, a small business advocate, and securities attorneys whom we interviewed. A small business advocate with whom we met stated that the higher ceiling could attract those businesses for which the $5 million ceiling was too low. Moreover, this advocate noted that some small businesses may want to enter the securities market but are not yet prepared to register an offering with SEC; thus, Regulation A would be a good way for them to enter the market. The higher ceiling also could increase underwriters’ interest in Regulation A, according to some stakeholders we interviewed. While investment banks are not interested in $5 million offerings, they are more likely to be interested in offerings that are closer to $50 million, according to some stakeholders. Under the JOBS Act, future Regulation A offerings generally remain subject to state blue sky laws, which may deter future use by small businesses. As previously discussed, addressing and complying with securities registration requirements of states can be costly and time-consuming, according to several stakeholders with whom we met. Recent Regulation A issuers, a small business advocate, and securities attorneys we interviewed stated that researching individual state laws and registering with multiple states significantly increased the legal and accounting costs associated with Regulation A offerings. As a result, even with the increased attractiveness of the $50 million ceiling, blue sky requirements may still dampen small businesses’ interest in Regulation A. However, some stakeholders also noted that with the increased ceiling, a Regulation A offering’s transaction costs (attorney fees and accounting costs) will represent a smaller proportion of the overall offering costs; an illustrative calculation follows this section. In addition, Rule 506 of Regulation D may continue to be preferable to Regulation A, according to securities attorneys, staff from some of the state securities administrators’ offices, and another stakeholder whom we interviewed.
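To make the proportion point concrete, the short Python sketch below uses hypothetical figures; the $100,000 in fixed attorney and accounting fees is an assumption for illustration, not an amount drawn from this report or from any issuer we interviewed.

    # A toy illustration, with hypothetical cost figures, of why a higher
    # offering ceiling shrinks transaction costs as a share of proceeds:
    # legal and accounting fees are largely fixed, so they weigh less on
    # a larger raise.

    fixed_costs = 100_000.0  # hypothetical attorney and accounting fees

    for offering in (5_000_000.0, 50_000_000.0):
        share = fixed_costs / offering
        print(f"${offering:>12,.0f} offering: fixed costs are {share:.1%} of proceeds")

Under these assumed figures, the same fees amount to 2.0 percent of a $5 million offering but only 0.2 percent of a $50 million offering.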
Most notably, businesses that use Rule 506 of Regulation D do not have to have the offering qualified by SEC or register in individual states, and can raise unlimited amounts of capital. Furthermore, the JOBS Act contains provisions that will allow issuers to make general solicitations and advertise offerings made under Rule 506 exclusively to accredited investors, which may further add to the appeal of Regulation D offerings. We provided a draft of this report to SEC and NASAA for their review and comment. Both provided technical comments, which we incorporated as appropriate. NASAA also provided written comments, which are reprinted in appendix I. In its letter, NASAA concurred with our finding that multiple factors have affected use of Regulation A, and suggested that the primary reason for its limited use is the “mini-public offering” process that businesses must complete. Stakeholders with whom we met did not consistently cite any single factor as the primary reason for the limited use of Regulation A. As noted in the report, NASAA stated that it will be working to develop model state registration requirements for the larger Regulation A offerings allowed under the JOBS Act, and NASAA suggested that further changes to federal securities laws, particularly Regulation A, should be deferred until states implement a new system to address the JOBS Act’s changes. In considering any changes, NASAA stressed the importance of balancing the needs of investors with the needs of businesses seeking to raise capital. We are sending copies of this report to the Chairman of the Securities and Exchange Commission, the appropriate congressional committees, and other interested parties. In addition, the report will be available at no charge on GAO’s website at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-8678 or clowersa@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix II. In addition to the individual named above, Andrew Pauline (Assistant Director), Elizabeth Jimenez, Wati Kadzai, Marc Molino, Lisa Moore, Barbara Roesmann, and Henry Wray made major contributions to this report.
Businesses seeking to use public offerings of securities to raise capital must comply with federal and state securities laws. Businesses must register offerings with SEC unless they qualify for an exemption. Regulation A exempts a securities offering that does not exceed $5 million from SEC registration if certain requirements are met. However, businesses still must file an offering statement that includes an offering circular and financial statements with SEC, and SEC staff review filings for consistency with applicable rules and accounting standards. In addition, Regulation A does not exempt offerings from states’ registration requirements, which are also intended to protect investors. In response to concerns about the decline in the number of public offerings, the JOBS Act requires SEC to amend Regulation A (or to adopt a new regulation) to raise the threshold for use of that registration exemption from $5 million to $50 million, and requires GAO to study the impact of state securities laws on Regulation A offerings. This report examines (1) trends in Regulation A filings, (2) how states register Regulation A filings, and (3) factors affecting the number of Regulation A filings and how the number of filings may change in the future. GAO analyzed SEC data related to financial regulatory filings, reviewed published research, and interviewed academics, SEC staff, state securities regulators, and small businesses. SEC and NASAA provided technical comments on a draft copy of this report, which GAO incorporated as appropriate. In its letter, NASAA concurred with GAO’s finding that multiple factors have influenced the use of Regulation A. The number of Regulation A offerings filed and qualified (that is, cleared) by the Securities and Exchange Commission (SEC) has declined significantly since peaking in fiscal years 1997 and 1998, respectively. In particular, offerings filed decreased from 116 in 1997 to 19 in 2011. Similarly, the number of qualified offerings dropped from 57 in 1998 to 1 in 2011. Securities attorneys GAO interviewed suggested that the decrease in filings after 1997 could be attributed to a number of factors, including the increased attractiveness of Regulation D. The National Securities Markets Improvement Act of 1996 preempted state registration requirements for other categories of securities, including certain Regulation D offerings, which are also exempt from SEC registration. In contrast, Regulation A offerings are generally subject to state securities laws and must go through a federal filing and review process. In recent years, businesses have used Regulation D and registered public offerings to a greater extent than Regulation A. States’ methods for registering and reviewing securities vary. One method used by states is “registration by qualification,” which is similar to registering securities with SEC, as issuers are required to submit certain documents to the responsible state securities agency for review and approval. All states conduct disclosure reviews of Regulation A offerings, meaning that they ensure that all material information is disclosed in the offering. According to North American Securities Administrators Association (NASAA) officials, most states additionally conduct a merit review—an analysis of the fairness of the offering to investors—although some states use stricter standards in their merit reviews than others.
NASAA officials have encouraged states to take steps to streamline their requirements and make them more uniform, including adopting a standard form for registering securities. NASAA plans to work with states to determine what changes in their registration methods will be needed in light of the Jumpstart Our Business Startups Act (JOBS Act). Multiple factors appear to have influenced the use of Regulation A, and views vary on whether raising the offering threshold will increase its use. These factors included the type of investors businesses sought to attract, the process of filing the offering with SEC, state securities laws, and the cost-effectiveness of Regulation A relative to other SEC exemptions. For example, identifying and addressing individual states’ securities registration requirements can be both costly and time-consuming for small businesses, according to research, an organization that advocates for small businesses, and securities attorneys that GAO interviewed. Additionally, another SEC exemption is viewed by securities attorneys with whom GAO met as more cost-effective for small businesses. For example, through certain Regulation D filings small businesses can raise equity capital without registering securities in individual states, as long as other requirements are met. State securities administrators, a small business advocate, and securities attorneys with whom GAO met had mixed views on whether the higher maximum offering amount ($50 million) under the JOBS Act would lead to increased use of Regulation A. For example, some thought that the higher threshold could encourage greater use of Regulation A, while others told GAO that many of the factors that have deterred its use in the past likely will continue to make other options more attractive.
Great Smoky Mountains National Park encompasses 800 square miles in North Carolina and Tennessee. Designated a national park in 1934, it is 95 percent forested and is renowned for the diversity of its plant and animal resources and the beauty of its ancient mountains. Elevations in the park range from 800 feet to 6,643 feet. Areas adjacent to the park are growing in population and economic activity. For example, between 1990 and 2000, while the U.S. population grew 13 percent, the population of North Carolina and Tennessee grew 21 percent and 17 percent, respectively, and the population grew 18 percent in Buncombe County, the most populous county in western North Carolina. Recreational visits to the park increased from 8.2 million in 1990 to 10.2 million in 2000, and non-recreational visits to the park increased from 9.4 million to 10.9 million during the same period. The burning of coal, gasoline, and other fossil fuels—by electric utilities, motor vehicles, manufacturing facilities, and other sources—generates sulfur dioxide and nitrogen oxide gases. When emitted into the air, these gases, and the substances into which they can be transformed, may be transported hundreds of miles away. In the atmosphere, these gases may be transformed into tiny particles or may react with other chemicals. Visibility is impaired when light encounters these tiny airborne particles and is absorbed or scattered before reaching the observer. Humidity magnifies the problem because the particles may attract water and grow in size, thereby scattering more light. In addition to reducing visibility, these tiny particles, which can be inhaled deeply into the lungs, have been consistently associated in epidemiological studies with hospital admissions and premature deaths. Ozone is not emitted directly; it forms when nitrogen oxides react with volatile organic compounds in the presence of sunlight. Repeated exposure to ozone may permanently damage lungs or trigger symptoms, such as chest pains or coughing. It may also interfere with plants’ ability to produce and store food, making them more susceptible to pathogens, pests, and other pollutants. (By contrast, in the upper atmosphere, ozone forms a protective layer that shields the earth from harmful ultraviolet rays.) The Clean Air Act requires the Environmental Protection Agency (EPA), in concert with state and local air pollution agencies, to regulate mobile and stationary source emissions of certain air pollutants. In addition, for pollutants it determines to be harmful to human health, EPA is responsible for setting standards for concentrations of those pollutants in the air that people breathe. It has designated six such principal pollutants, including sulfur dioxide, nitrogen dioxide, particulate matter, and ozone. Finally, certain provisions of the act specifically address visibility impairments. Although the 1977 amendments to the Clean Air Act established tougher requirements for new power plants and certain other facilities, existing facilities—including those operated by TVA and many other electric utilities—were exempted from these requirements, so long as they did not make physical changes to the plants that resulted in increased emissions. EPA alleged in 1999 that TVA and several privately owned utilities had violated Clean Air Act provisions by making physical changes to their units that had resulted in emissions increases. EPA’s lawsuit against TVA is pending.
Despite the progress over the past 20 years in reducing the emissions of most principal pollutants and improving air quality, ozone concentrations in many counties in the United States exceed national standards, and visibility in many otherwise pristine areas remains a problem. Over the same time period, the prevalence of asthma—the most common chronic disease of children in the United States and other developed countries—has increased by more than half. Visibility in Great Smoky Mountains National Park remained poor throughout the 1990s. On the worst days (those ranked in the bottom one-fifth of all days in terms of visibility—usually hot and humid summer days), visibility ranged between 12 and 15 miles from 1989 through 1999, according to the latest available data from the National Park Service. On average days (the middle one-fifth of all days), visibility stayed at about 27 miles during the decade and, on the best days (those ranked in the top one-fifth of all days), it stayed at about 51 miles. (The illustrative calculation following this section shows how days can be ranked into fifths.) Reduced visibility is primarily caused by airborne particles that either scatter or absorb light. In the eastern states generally, and in the park specifically, during the summer, these particles are predominantly fine sulfate particles formed from sulfur dioxide gas, a product of burning coal and other fossil fuels. The electric utility sector accounted for 67 percent of the nation’s sulfur dioxide emissions in 1999 (latest available data); the transportation sector, 7 percent; and other sources, the remaining 26 percent. Because sulfur dioxide gas and the sulfate particles into which it can be transformed can travel hundreds of miles on wind currents, the particles that degrade visibility in the park may originate from emissions released over a large area. According to a recent National Park Service analysis of the air masses that reached the park on low-visibility days (that is, days with high levels of particulates), the majority started, or spent considerable time, over the industrial Midwest, which allowed them to accumulate substantial quantities of sulfur dioxide. Air masses arrived from the west on a lesser, but still significant, portion of the low-visibility days, while few air masses arrived from the east and south on such days. In 1994, the year before the provisions of the Clean Air Act Amendments of 1990 limiting sulfur dioxide emissions took effect, the nation’s electric utilities emitted 14.9 million tons of sulfur dioxide. In 1999, the level fell to 12.7 million tons, and it is projected to decrease to just under 9 million tons in 2010. The number of days when ozone levels in the park exceeded a health-based threshold set by EPA, called “exceedances,” generally increased during the 1990s, according to data from EPA and the National Park Service. However, the number fell sharply in 2000; this decline is believed to be related to the cooler summer temperatures that year. Because sunlight is necessary to the formation of ozone and heat accelerates chemical reactions, ozone levels tend to peak in the summer months. The two principal precursors of ozone—nitrogen oxides and volatile organic compounds—have diverse sources. Motor vehicles and other transportation sources produced 55.5 percent of the nation’s nitrogen oxide emissions in 1999 (latest available data); electric utilities produced 22.5 percent; and other sources produced the remaining 22 percent.
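The Python sketch below illustrates, with synthetic data rather than actual Park Service measurements, how a year of daily visibility readings can be ranked into fifths so that the worst, average, and best days can be summarized as this report does.

    import numpy as np

    # Synthetic daily visibility readings in miles for one year; the
    # distribution and values are illustrative only, not park data.
    rng = np.random.default_rng(0)
    visibility = rng.gamma(shape=4.0, scale=7.0, size=365)

    # Quintile cut points split the sorted days into five equal groups.
    cuts = np.quantile(visibility, [0.2, 0.4, 0.6, 0.8])

    worst = visibility[visibility <= cuts[0]]                              # bottom one-fifth
    average = visibility[(visibility > cuts[1]) & (visibility <= cuts[2])] # middle one-fifth
    best = visibility[visibility > cuts[3]]                                # top one-fifth

    print(f"worst days:   mean visibility {worst.mean():5.1f} miles")
    print(f"average days: mean visibility {average.mean():5.1f} miles")
    print(f"best days:    mean visibility {best.mean():5.1f} miles")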
Volatile organic compounds include isoprene, which trees produce, and various hydrocarbons, such as those emitted when gasoline evaporates or is burned incompletely. In North Carolina, Tennessee, and other southeastern states, trees and other natural sources produce relatively high levels of isoprene. Thus, there is an abundance of naturally occurring volatile organic compounds in these areas to react with nitrogen oxides to form ozone. According to a recent National Park Service analysis, on high-ozone days, most of the air masses reaching the park arrived from the north and northwest—generally after passing through the industrial Midwest. Fewer air masses arrived from the west and south, and very few arrived from the east. The Clean Air Act Amendments of 1990 are expected to reduce national nitrogen oxide emissions by 2 million tons a year by 2010, relative to the level without these provisions. Other EPA policy initiatives are intended to make further cuts in emissions. Throughout the 1990s, death rates from two respiratory illnesses—(1) chronic lung disease and (2) pneumonia/influenza—in North Carolina and Tennessee were consistently higher than the comparable national rates. Moreover, death rates from these illnesses in the North Carolina and Tennessee counties adjacent to the park were generally higher than the comparable rates for these states as a whole throughout the 1990s, even though these counties had substantially lower death rates from all causes. (These rates are age-adjusted to allow comparability between states and over time; however, they were not adjusted for other influences, such as rates of smoking and socioeconomic levels.) In the past 50 years, studies conducted in the United States and abroad have consistently shown that people who breathe polluted air are more likely to suffer adverse health effects. These effects may be reflected in increases in breathing problems, hospital admissions, and premature mortality from lung and heart conditions. People over the age of 65 and those with pre-existing chronic heart and lung conditions, such as heart disease and asthma, are more likely than others to experience adverse health effects from exposure to air pollutants. However, scientists do not clearly understand why and how air pollution leads to adverse health effects, and many other factors, notably cigarette smoking, also affect the development and severity of lung and heart diseases. TVA’s decision on how to meet its customers’ demands for electricity is constrained by federal environmental laws and regulations, as well as internal policies. In recent years, TVA relied on coal to generate 62 percent of its electricity and on nuclear power, hydropower, and other sources, in that order, to generate the remaining 38 percent. In 2000, TVA’s peak capacity to generate electricity was 29.5 gigawatts (a gigawatt is a million kilowatts). TVA estimates that its peak demand for electricity will grow about 1.7 percent each year between 2001 and 2010. To meet this growth, TVA will have to plan for about one-half gigawatt of additional capacity each year—the equivalent of building an average-sized power plant every year. TVA can do so by purchasing power, constructing new plants, and providing incentives to its customers to reduce their peak demand. Although TVA’s coal consumption increased from 34 million tons in 1990 to 40 million tons in 1999, its emissions of sulfur dioxide (which can be transformed into visibility-reducing sulfate particles) declined 30 percent.
To achieve this reduction, TVA, among other actions, switched to coals with lower sulfur contents and installed equipment called “scrubbers” to remove sulfur dioxide from exhaust gases. TVA estimates that such emissions will decline 36 percent between 1999 and 2005. TVA’s emissions of nitrogen oxides (an ozone precursor) were relatively stable during the 1990s; however, it estimates that its emissions of nitrogen oxides—during the warm-weather months when ozone levels peak—will decline 70 percent between 1999 and 2005. To achieve this reduction, TVA is, among other actions, spending about $1 billion to install “selective catalytic reduction” devices—which remove nitrogen oxides from the exhaust gases—at some of its coal units. We provided a draft of this report for review and comment to the Departments of Agriculture, Health and Human Services, and the Interior; EPA; and TVA. We received letters from the Department of the Interior, EPA, and TVA, which are reprinted, along with our comments, as appendixes I through III, respectively. Those three agencies, as well as the Department of Agriculture and the Centers for Disease Control and Prevention (part of the Department of Health and Human Services), provided technical and clarifying comments, which we incorporated where appropriate. We did not receive comments from the National Institutes of Health (another part of the Department of Health and Human Services). The agencies generally agreed with the facts and analysis we presented. The Department of the Interior said that our narrative description of visibility conditions is accurate. However, it also said that the methodology it uses to calculate visibility conditions is currently under review and that some of the data we presented may be affected by this review. EPA also told us about this review and provided additional data on changes in the amount of airborne sulfate particles, an indicator of visibility. EPA said that a 12 percent decrease in sulfate particles was measured at a site about 100 miles northeast of Great Smoky Mountains National Park between the periods of 1990 through 1992 and 1998 through 2000. EPA did not provide comparable data for Great Smoky Mountains National Park. EPA noted that we appropriately presented the current understanding of the health risks from the pollutants that we examined. EPA and TVA commented that our location-specific analysis of death rates should be viewed with caution because of the many factors that influence the development and severity of respiratory illness. We agree. To analyze trends in visibility and ozone, we interviewed officials from, and reviewed studies and other documents prepared by, the Department of the Interior’s National Park Service, the Department of Agriculture’s Forest Service, EPA, and TVA, as well as recent scientific literature. We also interviewed representatives of, and reviewed studies and other documents prepared by, state officials in North Carolina and Tennessee and the Southern Appalachian Mountains Initiative—a voluntary partnership of federal and state agencies, industry, academia, environmental groups, and interested public participants. To analyze trends in respiratory illnesses, we reviewed recent scientific literature and contacted the Centers for Disease Control and Prevention, a unit within the Department of Health and Human Services, and EPA health researchers to ascertain the availability of data on health outcomes associated with exposure to air pollution.
We focused our analysis on mortality data because they were available for the nation as a whole and for the counties in North Carolina and Tennessee—the two states that border the park. We also focused on death from all causes and on death from pneumonia/influenza and chronic lung disease, two sets of illnesses that many studies associate with exposure to air pollution. We obtained national data from the Department’s National Center for Health Statistics and state and county data from the North Carolina and Tennessee health agencies. To describe trends in areas near the park, we divided counties in each state into regions used by the state agencies responsible for air quality monitoring and selected the region in each state that borders the park. To analyze the data for deaths from all causes and from selected respiratory illnesses, we used the same statistical procedures and significance tests that the National Center for Health Statistics uses to develop national death rates. To calculate death rates for comparison between the two states and the nation, we used the 1940 standard population, the current practice of the National Center for Health Statistics. For the comparison between county clusters and the states, we used the 2000 standard population, the current practice of the North Carolina State Center for Health Statistics. (The illustrative calculation following this section shows how a standard population is used to age-adjust death rates.) Although we did not independently verify the data obtained from the federal agencies and other sources, we used the same emissions and mortality data that federal and state agencies and other analysts generally use. We performed our work from October 2000 through May 2001 in accordance with generally accepted government auditing standards. We are sending copies of this report to the Chairman and Ranking Member, Committee on Appropriations, United States Senate; the Chairman and Ranking Minority Member, Committee on Appropriations, House of Representatives; the Chairman and Ranking Member, Committee on Governmental Affairs, United States Senate; the Chairman and Ranking Minority Member, Committee on Government Reform, House of Representatives; Representative Zach Wamp and other interested Members of Congress; the Honorable Ann M. Veneman, Secretary of Agriculture; the Honorable Tommy G. Thompson, Secretary of Health and Human Services; the Honorable Gale A. Norton, Secretary of the Interior; the Honorable Christine Todd Whitman, Administrator, EPA; the Honorable Skila Harris and the Honorable Glenn L. McCullough, Jr., Members of the Board, TVA; and other interested parties. We will also make copies available to others upon request. Questions about this report should be directed to me or David Marwick at (202) 512-3841. Key contributors to this report were Gene M. Barnes, Richard A. Frankel, and Cheryl A. Williams. As directed by House Report 106-1033 and as discussed with Chairman Charles H. Taylor, we focused on four issues: visibility, which is important to people who live near the park, visitors from other areas who travel to enjoy the park’s vistas, and others; ozone, which can harm people, animals, and plants; respiratory illnesses, a manifestation of the harm from ozone and other causes; and Tennessee Valley Authority’s (TVA) plans to reduce its emissions of sulfur dioxide and nitrogen oxides. To address these issues, we obtained information from the National Park Service, the Forest Service, EPA, and TVA. We also collected information from North Carolina and Tennessee state agencies, non-profit groups, and others. Reduced visibility is caused by small particles in the air.
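The Python sketch below shows the arithmetic of direct age adjustment with a standard population. The age groups, death counts, population counts, and weights are hypothetical; the actual calculations in this report followed National Center for Health Statistics procedures using the 1940 and 2000 standard populations.

    # Direct age standardization: weight each age group's death rate by
    # that group's share of a fixed standard population, so rates are
    # comparable across areas and over time. All figures below are
    # invented for illustration.

    age_groups = ["<25", "25-44", "45-64", "65+"]
    deaths     = [120, 450, 2300, 9800]                 # deaths by age group
    population = [400_000, 350_000, 250_000, 120_000]   # persons by age group
    standard   = [0.40, 0.30, 0.20, 0.10]               # standard population weights (sum to 1)

    # Age-specific rates per 100,000, then the standardized (weighted) sum.
    rates = [d / p * 100_000 for d, p in zip(deaths, population)]
    adjusted_rate = sum(w * r for w, r in zip(standard, rates))

    for group, rate in zip(age_groups, rates):
        print(f"{group:>6}: {rate:8.1f} deaths per 100,000")
    print(f"age-adjusted rate: {adjusted_rate:.1f} per 100,000")

Because the weights are fixed, two populations with different age structures can be compared without the older population's naturally higher crude death rate dominating the comparison.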
In the east generally, and in the park specifically, most of these particles are sulfates, which are formed in the air from sulfur dioxide gas. In analyzing visibility, we focused on the worst days, typically summer days, because it is then that reduced visibility is the most detrimental to enjoyment of the park. We obtained data for visibility on the worst days in Great Smoky Mountains National Park and two other parks selected for comparison. The other parks are Shenandoah in Virginia and Acadia in Maine. On the worst-visibility days (those ranked in the lowest one-fifth of all days for each year), visibility in the park remained poor between 1989 and 1999, ranging between 12 and 15 miles, according to data from the National Park Service. Visibility remained essentially unchanged on average days and on the best-visibility days. Between 1989 and 1999, visibility on the worst days in Shenandoah generally stayed at about 12 miles, while it improved in Acadia to 22 miles. Sulfate particles, in turn, are formed primarily from sulfur dioxide gas. For the nation, the largest source of sulfur dioxide gas emissions in 1999 was the electric utility industry, which accounted for about 67 percent of the total, according to EPA’s estimates. The transportation sector accounted for 7 percent, and other sources accounted for the remaining 26 percent. The 1990 amendments to the Clean Air Act require that sulfur dioxide emissions by electric utilities be reduced between 1995 and 2010. Electric utility emissions of sulfur dioxide declined substantially during the 1990s—from 16.2 million tons in 1989 to 12.7 million tons in 1999. Under these amendments, these emissions are scheduled to decline to just under 9 million tons a year in 2010. Atmospheric studies have found that sulfate produced from power plant emissions can travel many hundreds of miles. Therefore, the visibility-reducing sulfates that reach the park can come from sources near and far. Dozens of power plants are located in the eastern states. National Park Service analysts recently traced the paths of the air masses that delivered sulfate particles to the park for the 3 days before they reached the park; they did this for both high- and low-visibility days, between May and September in the years 1995 through 1999. They found that on high-visibility days (that is, days with low levels of particulates), the air masses arrived from nearly all directions of the compass, although very few air masses arrived from the northeast. The air masses had often traveled many hundreds of miles, with some of them starting their 3-day journey as far away as Canada, the Gulf of Mexico, or the Atlantic Ocean. Because they traveled so quickly, they spent little time over any particular area, including the industrial Midwest and other areas with high levels of sulfur dioxide emissions. Conversely, they found that on low-visibility days (that is, days with high levels of particulates), the air masses generally had traveled shorter distances, with most of them starting their 3-day journey just a few hundred miles away from the park and often following more roundabout trajectories, which kept them over particular areas for longer times. The large majority of the air masses started over the industrial Midwest, or spent considerable time there, which allowed them to accumulate substantial quantities of sulfur dioxide.
A lesser, but still significant, portion of the air masses on these low-visibility days arrived from the region west of the park, while few air masses arrived from the east and south on such days. Research continues on the sources of the air pollution that affects the park. The authors of a 1990 study told us that they are updating their study and hope to publish their results next year. Also, the Southern Appalachian Mountains Initiative—a voluntary partnership of federal and state agencies, industry, academia, environmental groups, and interested public participants—is analyzing the issue and also hopes to publish its results next year. Ground-level ozone is not emitted. It is produced from nitrogen oxides and volatile organic compounds in the presence of sunlight. Heat accelerates the chemical processes through which ground-level ozone is formed. Pursuant to the Clean Air Act, EPA establishes public health standards for various air pollutants. For ozone, EPA has established a threshold of 0.08 parts per million, measured over an 8-hour period. An “exceedance” is recorded on any day when a monitor measures ozone levels that exceed this threshold. (The state of North Carolina has adopted the federal standard.) We analyzed the number of exceedances during 1990 through 2000 for the Great Smoky Mountains and Shenandoah national parks and through 1999 for Acadia National Park. (Data for Acadia for 2000 were not available at the time of our review.) In the early and mid-1990s, the number of exceedances rose moderately in the park and was generally stable for the other two locations. In 1997-99, the number of exceedances was much higher for the Great Smoky Mountains and Shenandoah but remained level for Acadia. In 2000, the number of exceedances for the Great Smoky Mountains and Shenandoah fell sharply to about the 1996 level. It is believed that this decline in the number of exceedances is related to the cooler summer temperatures in 2000. In 1999, the transportation sector emitted 55.5 percent of nitrogen oxides nationwide, according to EPA’s estimates. This includes cars and trucks (called on-road vehicles), as well as farm equipment and other engines (called non-road sources). Electric utilities accounted for 22.5 percent and other sources, the remaining 22 percent. The transportation sector’s emissions of nitrogen oxides increased from 12.2 million tons in 1989 to 14.1 million tons in 1999—a rise of 16 percent. This increase is less than the 28-percent increase during that same period in the number of miles traveled by cars, trucks, and other vehicles, according to the Federal Highway Administration. Within the transportation sector, cars, trucks, and other on-road vehicles emitted 8.6 million tons in 1999, according to EPA. Farm equipment, lawn and garden equipment, and other non-road sources emitted the remaining 5.5 million tons. National Park Service analysts recently traced the paths of the air masses that arrived at the park on the lowest and highest ozone days from 1995 through 1999; specifically, they traced the masses for the 3 days before they reached the park. On the low-ozone days (those ranked among the 15 percent with the lowest concentrations during the 5-year period), the air masses arrived from all directions, with a slight preponderance from the south and relatively few traveling over the northeast (New England to Pennsylvania).
A substantial proportion of the air masses traveled long distances within 3 days; many traveled from the Atlantic Ocean and Gulf of Mexico, and a few traveled from the west (the Plains states) and the north (Canada). On the high-ozone days (every day when an exceedance of the 8-hour standard was recorded in the park), the air masses traveled substantially shorter distances within 3 days. Virtually none of the paths extended back to the Atlantic Ocean or Gulf of Mexico. Even on land, the air masses traveled substantially shorter distances within 3 days than on low-ozone days, but they still traveled hundreds of miles, and a much smaller proportion arrived from south of the park. Thus, on high-ozone days, most air masses arrived in the park from the north and northwest—generally the industrial Midwest—with fewer air masses arriving from the west and south and very few arriving from the east. Volatile organic compounds, which are another ozone precursor, originate from human activities (such as various hydrocarbons that are emitted when gasoline and other fuels evaporate or are burned incompletely) and from natural sources (such as isoprene from trees). In 1997, the amount of naturally emitted volatile organic compounds (28 million tons) nationwide was greater than the amount released through human activities (19 million tons), according to EPA. The Southeast, because of its forests, is particularly rich in the naturally produced compounds. Of the 15 states with the largest total amounts of these compounds, the 8 states with the greatest concentrations—pounds per square mile of land area—are all located in the Southeast. Thus, there is an abundance of naturally occurring volatile organic compounds in these areas to react with whatever nitrogen oxides are produced to form ozone. Because the formation of ozone in such a geographic area is constrained by the available amount of nitrogen oxides (called NOx), such an area is described as being “NOx limited.” Increasing evidence associates airborne particles and ozone with respiratory problems. Small particles seem to be particularly harmful to human health. Air quality is one of many factors that influence the development and severity of disease. Research continues in an effort to better understand the causal link. Over the past 50 years, epidemiological and other studies both here and abroad have consistently found that exposure to fine air particles and ozone is associated with respiratory and other health problems. Specifically, health effects—such as hospital admissions and premature mortality—increase as concentrations of ozone or airborne particles increase. These effects are seen most strongly in people with existing heart and lung conditions. Small particles seem to be particularly harmful to human health, in part because they can be inhaled deeply into the lungs and may carry other pollutants on their surfaces. A range of factors influences the development and severity of lung and heart conditions, including exposure to allergens or pollutants, genetics, and behavior. For example, cigarette smoking is a primary cause of chronic lung disease, other than asthma, and lung cancer. Epidemiological studies alone are limited in their ability to prove causality. However, they can suggest relationships for further scientific research, as in the case of cigarette smoking and heart disease.
The exact causal link between exposure to air pollution and adverse health effects is not completely understood, and research continues in an effort to learn the mechanisms by which pollutants harm human health and which kinds of airborne particles are the most harmful. To analyze trends in health, we used data on mortality—deaths—because comparable state-specific and national data on other health outcomes, such as hospitalizations, were not available. We adjusted all death rates for age to control for differences in the age distribution of the different populations. Age adjustment allows comparisons of rates over time and between groups; however, the rates are not adjusted for other influences, such as rates of smoking and socioeconomic levels. We made three sets of comparisons for death rates from all causes and for deaths from chronic lung disease from 1991 through 1998: 1. We compared the nation as a whole to the states of North Carolina and Tennessee. 2. We compared the entire state of North Carolina with the 19 counties that constitute western North Carolina, as categorized by the state’s Department of Environment and Natural Resources. 3. We compared the entire state of Tennessee with the 16 counties in eastern Tennessee, as categorized by the state’s Department of Environment and Conservation. In the first set of comparisons, we found that from 1991 through 1998, overall death rates were consistently higher in North Carolina than in the United States as a whole, and higher in Tennessee than in North Carolina. We also found that overall death rates declined by 8 percent for the United States as a whole and by 5 percent for North Carolina, and essentially stayed the same for Tennessee. Two sets of respiratory illnesses—chronic lung disease (a term applied to several related conditions including asthma, chronic bronchitis, and emphysema) and pneumonia/influenza—have consistently been associated with exposure to ozone and airborne particles. In the United States, these two sets of respiratory illnesses have been the fourth and sixth leading causes of death since 1991. Each year they together account for about 9 percent of all deaths. For chronic lung disease, we found that between 1991 and 1998 the rates for North Carolina were usually higher than the rates for the United States as a whole, and the rates for Tennessee were always higher than the rates for North Carolina; during the time period, these rates generally increased, which is counter to the general decline in death rates; and the increases in rates varied—6 percent for the United States but about 19 percent for North Carolina and 20 percent for Tennessee. The trends for pneumonia/influenza followed similar patterns. Death rates in North Carolina and Tennessee were higher than the national rates, and the rates in Tennessee were usually higher than the rates in North Carolina. Moreover, death rates increased in each state by about 10 percent, but decreased slightly for the nation. To analyze trends in health problems within each state, we compared age-adjusted death rates for the states of North Carolina and Tennessee with death rates for clusters of counties that border the park. In analyzing death rates for the entire state of North Carolina versus the 19 counties in western North Carolina, we found that overall death rates were consistently higher for the state than for western North Carolina and that, during the period, death rates dropped by 3 percent for the state and 2 percent for western North Carolina.
Results for the entire state of Tennessee versus the 16 counties in eastern Tennessee were similar, except that death rates rose slightly from 1991 through 1998. We found that overall death rates were consistently higher for the state than for eastern Tennessee and that, during the period, death rates increased by 3 percent for the state and 4 percent for eastern Tennessee. To enhance the soundness of our analysis, we compared deaths occurring over 2-year periods, rather than single years. Finally, we focused on the two respiratory illnesses—chronic lung disease and pneumonia/influenza—in the states and county clusters. For chronic lung disease, there were increases in age-adjusted mortality during this period in both states—about 19 percent in North Carolina and about 20 percent in Tennessee; the death rate per 100,000 population increased slightly faster in western North Carolina than in the state as a whole; and the rate in eastern Tennessee increased more slowly than did the rate for the entire state. The death rates for pneumonia/influenza (not shown here) were lower than the rates for chronic lung disease but increased in both states about 10 percent. The rate in western North Carolina increased more slowly than did the state rate, and the rate in eastern Tennessee increased somewhat more rapidly. Research is ongoing, in this country and abroad, to address the scientific uncertainties of the causal links between exposure to air pollution and harm to human health. TVA’s choices in generating power are constrained by laws, regulations, and internal policies. For example, the Clean Air Act limits certain emissions from coal-fired power plants. The TVA Act provides that the generation of power from hydroelectric units is a lower priority than navigation and flood control. An internal TVA policy limits the time period when TVA can draw down the lakes (reservoirs) that it manages for flood control and in the process generate hydropower. The longer the time frame, the more TVA must and can do to comply with laws, regulations, and policies. Because of the multiple purposes in the TVA Act and elsewhere, and because of other laws, regulations, and policies, TVA faces a difficult balancing act between its operating priorities and the often conflicting or competing user needs. To generate electricity, TVA relies primarily on coal. Between 1990 and 1999, TVA’s coal consumption increased 18 percent. In the most recent 5-year period, from 1996 through 2000, coal accounted for 62 percent of generation; nuclear power, 28 percent; hydroelectric power, 9 percent; and other sources, the remaining 1 percent. During that 5-year period, the amount of nuclear power increased from 35.4 to 46.9 million kilowatt-hours, and its share of the total increased from 24 percent in 1996 to 31 percent in 2000. In the same period, the amount of hydroelectric power fluctuated, largely because of changes in water levels; it generally declined from 16.1 to 8.8 million kilowatt-hours, and its share of the total declined from 11 percent in 1996 to 6 percent in 2000. TVA’s emissions of sulfur dioxide declined 30 percent from 1989 through 1999. It estimates that, as a result of additional steps under way and planned for the next decade, its sulfur dioxide emissions will decline an additional 36 percent between 1999 and 2005. (The percentage reduction reflects TVA’s most recent estimate of 2005 emissions, which is not reflected in the figure.)
Among the steps TVA has taken and plans to take to reduce these emissions are burning coals with lower sulfur contents at 51 of its 59 units and installing equipment called “scrubbers” in two units each at the Cumberland, Paradise, and Widows Creek plants. Scrubbers can remove more than 90 percent of the sulfur dioxide from a plant’s emissions and are considered the best currently available technology for reducing such emissions. In looking at emissions of nitrogen oxides, we focused on the 5-month “ozone season” (May 1 through September 30), when ozone levels tend to be relatively high. TVA’s emissions of nitrogen oxides in 1999 were about the same as in 1989. In 1998, TVA announced plans to invest nearly $1 billion in pollution-control equipment. It projects that its ozone-season nitrogen oxide emissions will decline about 68 percent between 1999 and 2005. (The percentage reduction reflects TVA’s most recent estimate of 2005 emissions, which is not reflected in the figure.) The planned equipment includes selective catalytic reduction devices for 25 of its 59 coal-fired units. These devices transform nitrogen oxide emissions into harmless nitrogen and water vapor.
Concerns have been growing about air quality, visibility, and respiratory illnesses around Great Smoky Mountains National Park, which straddles the border between North Carolina and Tennessee. This report analyzes recent trends in, and contributing factors to, (1) visibility impairments, (2) ground-level ozone, and (3) respiratory illnesses. This report also examines the Tennessee Valley Authority's (TVA) plans to reduce its emissions of regulated pollutants from generating electricity. Visibility impairments and ozone are largely attributable to three types of emissions: sulfur dioxide, nitrogen oxides, and volatile organic compounds. The counties that border the park generally have slightly higher mortality rates from two types of respiratory illness. The three types of emissions interact in the atmosphere to form ozone gas and sulfate particles, which are linked to respiratory illnesses. In response to federal laws and other factors, TVA is making substantial environment-related investments and expects to reduce its annual emissions of sulfur dioxide by 40 percent and its "ozone season" emissions of nitrogen oxides by 70 percent between 1999 and 2005.
DOD has been reporting problems with its data accuracy for real and personal property since at least 1990. The Department has provided a number of reasons for the unreliable reporting, including property systems that maintain item accountability not being integrated with financial accounting systems. In fiscal year 1995, the DOD Comptroller concluded that this lack of integration adversely affected the accuracy of accounting systems data and financial reporting. The DOD Comptroller also stated that general ledger control over property, which is necessary to ensure that all financial transactions are recorded in the official accounting records, is lacking or inadequate. Accordingly, the DOD Comptroller selected DPAS to remedy these deficiencies and had implemented the system at over 150 sites as of June 1997. The DOD Comptroller designated DPAS as the property accounting system for all DOD real and personal property in order to bring DOD assets under proper accountability and financial control. DPAS is expected to provide on-line capability to support all functions that are associated with property accountability and equipment management, as well as financial control and reporting. In addition, DPAS is expected to produce the financial transactions necessary to record additions, reductions, or changes in the value of capital assets to the various general ledgers used in DOD. DPAS is also the subsidiary ledger containing all the detailed property information necessary to support the general ledger summary totals. DPAS was adapted from the Army Materiel Command’s Installation Equipment Management System by the Army’s Industrial Logistics Systems Center (ILSC) personnel. Under the oversight of the DOD Comptroller, the Financial Systems Activity (from the Defense Finance and Accounting Service’s (DFAS) Columbus, Ohio, operating location) and ILSC are responsible for maintaining and enhancing (1) the DPAS software, (2) all systems documentation, such as the user’s manual, functional description, and system specifications, and (3) the processing equipment required to host DPAS. The DOD Comptroller is responsible for defining all accounting requirements, including any new accounting requirements. One of the organizations where DPAS was implemented is DISA, the DOD agency responsible for information technology. One of DISA’s organizations is the Defense Megacenter business area, which consists of a headquarters and 16 Defense Megacenters that provide information processing services to DOD customers on a fee-for-service basis. As of June 1997, DPAS had been implemented at 39 DISA sites overall, including the 16 Defense Megacenters. The DOD Comptroller’s office published an overall Implementation Handbook for DPAS; the current version at the time of our review was dated April 1996. A specific implementation plan is also developed for each implementing agency, of which DISA is one. DISA’s implementation plan for DPAS was dated February 6, 1996. Some of the items included in the plan were (1) an implementation schedule, (2) a description of each organization’s responsibilities, (3) equipment requirements, and (4) a description of training to be provided. Also, the plan stated that the Director, DISA, is responsible for specifying interface requirements for each DISA DPAS location and working with the DOD Comptroller’s implementation team to develop the required interfaces.
To determine whether DPAS meets federal accounting standards, we used relevant public laws, Office of Management and Budget (OMB) Circulars, Statements of Federal Financial Accounting Standards (SFFAS), Joint Financial Management Improvement Program (JFMIP) publications, and DOD’s Financial Management Regulation (FMR). We also used the sections of our draft Federal Financial Management System (FFMS) review methodology on Fixed Assets, Funds Control, General Ledger, and Cost Accounting to evaluate the financial control functions of DPAS and how general PP&E information is shared or exchanged with other financial areas, such as cost accounting. Financial control functions include ensuring that the system design allows proper recording of transactions for general PP&E in the general ledger. They also include ensuring that the system has been implemented with adequate internal controls to ensure data accuracy. To evaluate DPAS as designed and implemented, we obtained and reviewed the DPAS system documentation and reviewed the DPAS Implementation Handbook. In addition, we reviewed DPAS implementation at the Huntsville DISA Defense Megacenter. We selected Huntsville because a DISA official stated that this center had the fewest implementation problems. At the time we began audit work, DISA was one of the larger DOD agencies that had implemented DPAS at multiple sites. We visited DFAS-Pensacola, Florida, and the DISA Financial Management Liaison Office (FMLO) in Pensacola to review how the DISA Defense Megacenter’s DPAS financial transactions were processed and to better understand the processing logic for the interface between DPAS and the DISA general ledger. DFAS-Pensacola provides finance and accounting services to some DOD activities. The FMLO serves as the liaison between DISA and DFAS on financial matters. Our scope did not include assessing technical design and software development issues, with the exception of DPAS’ integration with other functional areas such as procurement. Also, our review was limited to the financial control functions of DPAS and therefore did not include logistics functions. We did not evaluate, from either a cost-benefit or a technical standpoint, the DOD Comptroller’s selection of DPAS as a standard migration system, nor did we assess whether there are viable alternatives to DPAS. We reviewed documents and interviewed officials at the following locations: (1) DISA headquarters, Arlington, Virginia; (2) DISA’s western hemisphere office, Falls Church, Virginia; (3) DISA’s Defense Megacenter, Huntsville, Alabama; (4) the Industrial Logistics Systems Center at Letterkenny Army Depot, Chambersburg, Pennsylvania; (5) DFAS-Pensacola, Florida; (6) DFAS-Columbus, Ohio; and (7) the Information Technology Financial Management Directorate, Office of the Comptroller, Arlington, Virginia. We performed our work from August 1996 to August 1997 in accordance with generally accepted government auditing standards. We requested written comments from the Secretary of Defense or his designee on a draft of this report. The Acting Under Secretary of Defense (Comptroller) provided us with written comments. These comments are evaluated in the “Agency Comments and Our Evaluation” section and are reprinted in appendix I. DPAS is designed to provide information to account for most general PP&E. This information is created based on information recorded in the DPAS property book.
However, DPAS does not have the financial information to process certain minor types of general PP&E, such as foreclosed assets and the depletion of natural resources, in accordance with DOD policy and existing accounting requirements. Although DPAS, as DOD's property system, should be able to record all required transactions, the omitted items do not affect a significant portion of DOD's assets. For example, natural resources represent only 1.2 percent of DOD's total general PP&E.

Also, the DOD Comptroller has not yet provided guidance to ILSC on implementing federal accounting standards that become effective for periods beginning after September 30, 1997. In contrast to the omitted items referred to in the previous paragraph, implementation of these new standards may have a significant effect on DOD's financial reporting. For example, accounting for deferred maintenance costs for assets such as buildings, facilities, and equipment is a new requirement under SFFAS No. 6 and therefore was not included in the original DPAS design. According to a DOD Comptroller official, DOD is currently updating its Financial Management Regulation to incorporate the new standards' requirements. A DPAS project office official indicated that the specific system changes needed for DPAS to meet the new standards cannot be identified until the DOD Financial Management Regulation is updated.

The DPAS functional design can be modified to meet all current and pending property accounting requirements through changes that include the addition of data elements and financial transactions. DPAS as designed does not include the standard general ledger postings in its financial transactions. As a result, each site must determine the general ledger posting logic for DPAS financial transactions. The following are specific areas where DPAS should be expanded to meet these requirements.

DPAS does not provide the capability to calculate the cost of a capital lease. Capital leases transfer substantially all the benefits and risks of ownership to the lessee. Agencies are required by federal accounting standards (SFFAS No. 5, currently in effect, and SFFAS No. 6, effective October 1, 1997) to calculate the net present value of lease payments to determine the cost of capital leases.
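As an illustration of the capital lease requirement just described, the following is a minimal sketch, not drawn from DPAS, of the net present value calculation the standards call for; the payment amount, interest rate, and lease term are hypothetical.

```python
# Minimal sketch of the capital lease cost calculation required by federal
# accounting standards: the cost of a capital lease is the net present value
# of the lease payments. All figures below are hypothetical.

def capital_lease_cost(payment, annual_rate, num_payments, payments_per_year=12):
    """Present value of a stream of level lease payments."""
    r = annual_rate / payments_per_year  # per-period discount rate
    # Present value of an ordinary annuity: payment * (1 - (1 + r)**-n) / r
    return payment * (1 - (1 + r) ** -num_payments) / r

# Example: $5,000 per month for 60 months, discounted at 7 percent annually.
print(round(capital_lease_cost(5000, 0.07, 60), 2))  # roughly 252,510
```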
DPAS cannot track deferred maintenance costs. Deferred maintenance, as defined in SFFAS No. 6, paragraph 77, is maintenance that was not performed when it should have been or was scheduled to be and which, therefore, is put off or delayed for a future period. SFFAS No. 6 requires a line item on the statement of net cost, with a note reference, for deferred maintenance if the amount is determined by management to be material. It also requires the activity to identify each major class of asset for which maintenance has been deferred and the method of measuring it. Also, DOD's draft Federal Accounting Standards and Requirements, dated February 24, 1997, which applies to Defense accounting systems, includes requirements to account for deferred maintenance. Further, DPAS does not allow the user to designate deferred maintenance as critical or noncritical. The standard allows the optional disclosure of deferred maintenance to be stratified between the critical and noncritical amounts needed to return each major class of asset to its acceptable operating condition. If management elects to disclose critical and noncritical amounts, the disclosure shall include management's definition of these categories.

DPAS does not provide features to record, value, and report foreclosed property, or to record any increase or decrease in the value of these assets. SFFAS No. 3 (effective October 1, 1993) contains explicit guidance on recognizing, valuing, disposing of, and disclosing each of these assets. Also, DOD's draft Federal Accounting Standards and Requirements, dated February 24, 1997, which applies to Defense accounting systems, includes requirements to account for foreclosed property.

DPAS does not have the capability to provide a transaction to the accounting system to record gain or loss amounts. For example, SFFAS No. 6 requires that the net realizable value of an asset be used to calculate a gain or loss upon disposal or exchange with a nonfederal entity.

DPAS does not provide the ability to record the total estimated environmental clean-up costs for an asset when it is placed in service, or upon discovery of the need for clean-up, nor to periodically update these costs. Also, the capability is not provided to calculate the annual expense and accrued liability amounts. SFFAS No. 5 requires recognition of the liability for clean-up from federal operations resulting in hazardous waste. SFFAS No. 6 contains detailed guidance for accounting for clean-up costs and recognizing the annual expense and accrued liability amounts.

DPAS does not provide features to deplete assets such as natural resources. While SFFAS No. 6 does not address natural resources, DOD's FMR, Volume 4, requires DOD activities to use the depletion of natural resources account when management deems that depletion accounting is necessary.

An objective of implementing DPAS DOD-wide is to ensure financial control and accurate reporting of general PP&E. Figure 1 illustrates how, in general, information must flow among property and related systems to ensure financial control over property. Although the DPAS design allows it to be implemented as shown in figure 1, which gives DOD the ability to have a fully automated property system that assures financial control and data integrity, DISA's implementation of DPAS failed to achieve this objective.

Financial control over property is established when detailed transactions maintained at one location are also maintained in summary form in the financial records, referred to as the general ledger. For example, when it is determined that property is needed, the property book officer notifies both the supply/procurement officer and core accounting personnel. When the contract is issued, procurement personnel, in turn, notify both the property book officer to expect the item and core accounting, which includes the general ledger. When the item is received, receiving personnel notify both the property book officer and core accounting officials. This duality provides not just financial control, such as ensuring accurate recording of the purchase price, but also operational control, such as recording the location and condition of the asset. Ensuring data accuracy in this process requires that each transaction be edited for the processing requirements of each system. In addition, if data reside in two systems, periodic reconciliations must be performed to ensure that the data in the two systems remain in balance.
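The dual-recording flow just described lends itself to a short illustration. The following is a minimal sketch under simplified assumptions, with a single hypothetical asset account and in-memory records rather than real systems, of how one event updates both the detailed property book and the summary general ledger, so that the detail can later be reconciled to the summary.

```python
# Minimal sketch of the dual-recording control described above: every
# property event is posted both to the detailed property book and, in
# summary form, to the general ledger. All names and values are hypothetical.

property_book = {}                          # detail: one record per asset
general_ledger = {"1750 Equipment": 0.0}    # summary: account balances

def record_receipt(asset_id, description, location, cost):
    """Post a received asset to both the property book and the general ledger."""
    # Operational control: the property book tracks the item itself.
    property_book[asset_id] = {
        "description": description,
        "location": location,
        "cost": cost,
    }
    # Financial control: the general ledger carries only the summary value.
    general_ledger["1750 Equipment"] += cost

record_receipt("A-0001", "File server", "Megacenter Huntsville", 25_000.00)

# Because the same event updates both records, the detail should always
# roll up to the summary; a periodic reconciliation verifies that it does.
detail_total = sum(rec["cost"] for rec in property_book.values())
assert detail_total == general_ledger["1750 Equipment"]
print(detail_total, general_ledger["1750 Equipment"])
```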
The only automated interface implemented for DPAS at DISA is between the property book and the core accounting system. There is no interface between DPAS and procurement, and interfaces between DPAS and the other functions, such as receiving, are manual. As such, achieving operational and financial control over assets is highly dependent on the accuracy of data that are manually processed or maintained, and on manual compensating controls, such as routine reconciliations. As shown in figure 2, the DISA procurement system—the Automated Contract Preparation System (ACPS)—did not send information to the property book, DPAS, at the same time it was sent to the core accounting system. According to DISA officials, the interface between ACPS and DPAS was not built because it would have required an extensive interface development effort. The absence of this interface, either automated or manual, means that DPAS is not used to record initial procurement activities. Therefore, DPAS data cannot be completely reconciled with DISA's core accounting system data.

We found over $100 million in differences between the values shown in the DPAS detailed property records and the summary-level records maintained in the DISA general ledger. Although the DISA Financial Management Liaison Office representative was aware of the differences and was preparing reconciliation procedures, he could not explain the reasons for the differences, nor had a reconciliation been attempted at the time of our review. As discussed below, one cause was that DPAS transactions were incorrectly recorded in the DISA general ledger. As of May 31, 1997, DISA had submitted journal vouchers totaling over $118 million to correct the differences.

We identified the following DPAS implementation problems at DISA. These problems contributed to the material difference discussed above and have the potential to erode both operational and financial control over property.

DPAS transactions were incorrectly recorded in the DISA general ledger. The DISA general ledger contained items that were recorded incorrectly due to errors in the automated interface program that allows DPAS to communicate directly with DISA's general ledger. This interface program contains processing rules that, based on the transaction function code and certain other fields in DPAS, record DPAS transactions as increases or decreases in the appropriate DISA general ledger accounts. Because these processing rules are included neither in DPAS nor in the DOD Comptroller's DPAS Implementation Handbook, DISA developed its own general ledger processing rules for the interface program. However, errors in the processing rules resulted in increases being recorded as decreases, and decreases as increases, in the balance of assets held. For example, transfers-in of assets were recorded as decreases instead of increases. As a result of our work, DISA and DFAS officials began taking actions to correct the interface program and the account balances.

Procedures were not adequate to ensure control of rejected transactions. To ensure control of rejected transactions, data should ideally be edited for all systems at the original point of entry, and transactions that fail the edits should be placed in a suspense file. If all edits are not performed at the original point of entry, additional edits can be included in automated interface programs and a second suspense file created. DISA used edits in the interface programs but failed to set up the needed suspense file. Therefore, DISA cannot ensure that DPAS transactions rejected by the automated interface program to the general ledger were corrected and recorded properly in the general ledger.
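To make the two preceding problems concrete, the following is a minimal sketch, assuming hypothetical transaction function codes, account names, and posting rules rather than the actual DPAS or DISA logic, of an interface program that maps each transaction code to a standard debit and credit posting and routes transactions that fail its edits to a suspense file.

```python
# Minimal sketch of an interface program between a property system and a
# general ledger. The transaction function codes, account numbers, and
# posting rules below are hypothetical, not those of DPAS or DISA.

# Posting rules: transaction code -> (debit account, credit account).
# Getting these wrong is the DISA error described above (for example,
# posting a transfer-in as a decrease instead of an increase).
POSTING_RULES = {
    "RCPT": ("1750 Equipment", "2110 Accounts Payable"),    # receipt
    "TRIN": ("1750 Equipment", "5720 Transfers In"),        # transfer-in
    "DISP": ("7210 Losses on Disposal", "1750 Equipment"),  # disposal
}

ledger = {}         # account -> balance (debits positive, credits negative)
suspense_file = []  # rejected transactions held for correction and re-entry

def post(txn):
    """Edit a transaction and either post it to the ledger or suspend it."""
    code, amount = txn.get("code"), txn.get("amount", 0)
    if code not in POSTING_RULES or amount <= 0:
        # Control over rejects: failed edits go to a suspense file rather
        # than being silently dropped.
        suspense_file.append(txn)
        return
    debit, credit = POSTING_RULES[code]
    ledger[debit] = ledger.get(debit, 0) + amount
    ledger[credit] = ledger.get(credit, 0) - amount

post({"code": "TRIN", "amount": 12_500})  # posts as an increase to equipment
post({"code": "XXXX", "amount": 900})     # unknown code: goes to suspense
print(ledger, len(suspense_file))
```

A mistake in the posting-rule table, such as swapping the debit and credit accounts for the transfer-in code, would reproduce the kind of reversed postings found at DISA, which is one reason for keeping the standard posting logic in a single, centrally maintained place rather than having each site derive its own.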
Reconciliations were not performed, and internal controls were not in place to ensure that discrepancies were corrected promptly. DISA was not performing reconciliations between DPAS and its general ledger. For example, transactions had been posted incorrectly to the DISA general ledger as decreases instead of increases, as noted in our first example above, since processing began in September 1996. These posting errors went unresolved because reconciliations were not performed. The DPAS Implementation Handbook said that reconciliations should be performed but did not provide guidance on how to perform them, nor did it state how often or by whom they should be performed. DISA was developing reconciliation procedures, but as of July 1997 they were not finalized.

The DPAS financial transaction for equipment in transit was not used. DISA does not use the financial transaction for equipment in transit that DPAS provides. Equipment in transit information updates the general ledger to provide visibility over assets that are no longer under a site's physical control but for which the site is still accountable. However, when DISA assets are moved from one site to another, instead of processing an equipment in transit transaction, the assets remain in the DISA general PP&E asset accounts until the receiving activity indicates receipt of the asset. If a physical inventory were taken, assets identified as lost might actually be in transit to another location. DOD FMR, Volume 4, requires that equipment in transit be recorded as such in an agency's accounting system when transfer begins and that the equipment be removed from the account only when it is received and accepted by the gaining activity. The DPAS Implementation Handbook did not specifically require users to use certain DPAS financial transactions.

Many of the problems experienced at DISA and resulting in inaccurate property data can be linked to several issues that affect DOD-wide implementation of DPAS. Specifically, DOD has not completed its strategic planning process for agencywide systems integration, which would include defining how the property function is accomplished and the responsibilities of all involved parties. Also, DOD's implementation strategy for DPAS relies on the services and Defense agencies to determine where and when to implement the system, with no overall oversight to ensure that the DOD Comptroller's stated goal of full implementation by the year 2000 is met. Finally, as illustrated by the problems we found at DISA, the DPAS Implementation Handbook lacks specific guidance on several important factors, such as reconciliations.

Development of a concept of operations for DOD's property function would help ensure that DPAS is able to achieve the data accuracy and financial control it was designed to produce, in both DOD's current and future operating environments. As we stated in our June 1997 letter on DFAS' draft Federal Accounting Standards and Requirements, the strategic planning process for systems should include a concept of operations that delineates how the property function is (or will be) accomplished and defines the roles, responsibilities, and relationships among the various DOD entities involved. Validating DPAS and the services' and Defense agencies' related property systems against the concept of operations would allow DOD to determine whether each system is appropriately interfaced, either manually or through automation, with other systems to provide data accuracy and property accountability.
In addition, for the concept of operations to be useful, it should encompass (1) all financial management functions related to property, not just those under the control of the DOD Comptroller, and (2) both current and future property operations, to document how DOD operates today and to obtain mutual agreement from all of the parties on how DOD will conduct its property operations in the future. Not preparing a concept of operations may result in development efforts in other business areas being incompatible with DPAS, the selected property system. For example, during the course of our audit we learned that Air Force officials have expressed concern as to whether DPAS will fit into their planned functionality. In the absence of an overall concept of operations that would lay out how the system is to be implemented to maintain data accuracy, each implementing site essentially is charged with developing its own concept of operations, with no assurance of adequate controls or consistency among sites.

When DPAS was selected as DOD's standard property system, it was anticipated that it would be interfaced with a single standard system in each business area, such as accounting, supply, and procurement. Under this scenario, a limited number of automated interfaces would need to be developed. However, due to the long-term nature of DOD's standard systems development effort, DOD is currently using multiple systems in these areas and will continue to do so for the foreseeable future. For example, DOD has at least 76 procurement systems. DOD plans to replace 10 of these systems with the Standard Procurement System; however, the standard system will not be fully implemented until at least 2001.

Currently, data accuracy at DOD can be maintained either through automated interfaces with numerous nonstandard systems or through manual procedures, such as reconciliations between stand-alone systems. According to JFMIP systems standards, interfaces should be electronic unless the number of transactions is so small that it is not cost-beneficial to automate the interface. In either case, reconciliations, including automated matching, between systems should be maintained to ensure accuracy of the data. In general, manual interfaces that rely on the physical keying and rekeying of data substantially increase the opportunity for error and create the need for manual compensating controls. Although a relatively small organization such as DISA could use manual procedures, if effectively implemented, to maintain data accuracy, such procedures would be too labor-intensive and inefficient on a DOD-wide basis. However, the DOD Comptroller's implementation strategy relies on individual sites to specify property system interfaces and to determine how data accuracy is to be maintained. Therefore, DOD has no assurance that automated interfaces and automated matching processes will be developed wherever cost-effective and in accordance with an overall strategy, or that manual controls will be maintained where necessary.

Although DOD has established a goal of achieving financial control over its assets by the year 2000, the DOD Comptroller does not have a schedule to implement DPAS consistently at all sites by that date. DOD has not identified its universe of DPAS users—those sites in the services and Defense agencies that must use DPAS to ensure that control over property is maintained—DOD-wide.
Rather, the DOD Comptroller has left it up to each military service and Defense agency to identify where and when they want to implement DPAS, without providing time frames for identifying sites or ensuring that the correct sites are identified. Thus, DOD could not tell us how many sites remain to be implemented or the associated time frames for meeting its year 2000 goal. A complete implementation schedule would also help ensure that the DPAS program office is able to allocate its resources to adequately support implementation.

In addition, the guidance DOD has developed for implementing DPAS is inadequate, as illustrated by the problems we found at DISA. Specifically, the DPAS Implementation Handbook does not provide instruction for accurately posting DPAS transactions to the general ledger, which may result in inconsistent and inaccurate reporting of DOD property. Further, the Handbook does not specify that all transactions generated by DPAS that are applicable to the agency should be used; inaccurate and inconsistent financial reporting may result. Also, the Handbook states that reconciliations should be performed but does not specify how or by whom. Failure to perform reconciliations, as we found at DISA, may result in inaccurate data going undetected.

As designed, DPAS produces transactions to provide financial control and to account for most general PP&E, but it needs to be enhanced to meet all applicable federal accounting standards. Also, issues related to the need for planning and implementation guidance must be addressed. As evidenced by DISA's implementation problems, DOD has not defined how the property function is to be performed or provided implementation guidance to ensure that internal controls are in place. DOD's implementation strategy relies on individual sites to determine whether and what interfaces to develop—automated or manual—and to establish the necessary controls. Given the size and complexity of DOD, this approach is unlikely to result in an efficient and cost-effective implementation of DPAS by the year 2000.

To ensure that DPAS meets the DOD Comptroller's stated goal of achieving financial control and accountability over general PP&E by the year 2000, we recommend that the Deputy Secretary of Defense take the following steps.

Develop, in consultation with the appropriate Assistant Secretaries, a concept of operations that (1) lays out how the property function is to be accomplished, including identification of needed manual and automated interfaces and related controls, and (2) defines, for both the current and future operating environments, the roles, responsibilities, and relationships among the various DOD entities involved, such as the Comptroller's office, DFAS, DOD component agencies, and the military services.

Develop a detailed DPAS implementation plan that includes a schedule identifying at what sites and when the system will be implemented.

Revise the DPAS Implementation Handbook to (1) specify the complete financial transactions for posting DPAS data to the general ledger, (2) include specific guidance on how and when to perform reconciliations and who should perform them, including automated matching of DPAS records to the general ledger, where appropriate, and (3) require that all financial transactions generated by DPAS, such as equipment in transit, be used.
Expand DPAS functionality to ensure that it includes transactions to meet all current and pending property-related requirements found in federal accounting standards and DOD financial management regulations. Transactions produced by DPAS for updating the general ledger should reflect the posting logic for both the debit and the credit in accordance with the U.S. Government Standard General Ledger.

In addition, to resolve the implementation problems specific to DISA, we recommend that the Director, DISA, (1) submit a request to the DPAS project office to include the appropriate additional transaction edits required by DISA for general ledger processing, (2) correct the interface program, and (3) finalize procedures for reconciling DISA's general ledger accounts for property to the DPAS property records, including provisions to ensure that timely reconciliations are accomplished and that general ledger control is maintained over general PP&E.

In written comments on a draft of this report, DOD's Acting Under Secretary of Defense (Comptroller) stated that the Department generally agreed with the report's findings and recommendations. The letter also stated that the Department will provide comments on each recommendation later. Although DOD generally agreed with the report's findings and recommendations, DOD stated its belief that it is erroneous to find deficiencies in DPAS' ability to comply with requirements that have not been finalized by the Federal Accounting Standards Advisory Board (FASAB) and for which implementation instructions have not been issued by OMB. All of the accounting requirements for general PP&E addressed in this report were developed by FASAB, approved by GAO, OMB, and Treasury, and issued by GAO and OMB in 1995. These requirements are currently in effect or will become effective October 1, 1997. OMB guidance in Bulletin No. 97-01, Form and Content of Agency Financial Statements, was issued on October 16, 1996, for the preparation of financial statements for the fiscal year ending September 30, 1998.

In commenting on our draft report, DOD asked that we clarify that the DISA posting problem did not involve a deficiency in the internal operations of DPAS. We believe that the implementation issue that arose at DISA could have been mitigated if the DPAS design had included the standard general ledger posting logic. Because not all DOD general ledgers currently use the U.S. Government Standard General Ledger, any crosswalks required to enter DPAS transactions in these nonstandard general ledgers should reside in interface programs. In response to DOD's comments, we clarified our recommendation to state that transactions produced by DPAS for updating the general ledger should reflect the posting logic for both the debit and the credit in accordance with the U.S. Government Standard General Ledger.

This report contains recommendations to you. Within 60 days of the date of this letter, we would appreciate receiving a written statement on actions taken to address these recommendations. We are sending copies of this letter to the Chairmen and Ranking Minority Members of the Senate Committee on Armed Services, the House Committee on National Security, the Senate Committee on Governmental Affairs, the House Committee on Government Reform and Oversight, and the Senate and House Committees on Appropriations.
We are also sending copies to the Director of the Office of Management and Budget; the Acting Under Secretary of Defense (Comptroller); the Acting Director, Defense Finance and Accounting Service; and the Director, Defense Information Systems Agency. Copies will be made available to others upon request. Please contact me at (202) 512-9095 if you have any questions concerning this report. Major contributors to this report are listed in appendix II.

J. Jose Watkins, Senior Evaluator
Susan J. Schildkret, Evaluator
V. Malvern Saavedra, Evaluator
GAO determined whether the Defense Property Accountability System (DPAS): (1) was designed to meet functional accounting requirements for general property, plant, and equipment (PP&E); and (2) was implemented at the Defense Information Systems Agency's (DISA) Defense Megacenters in a manner that ensures it meets functional accounting requirements for general PP&E. GAO noted that: (1) as functionally designed, DPAS can provide financial control and generate information to account for most general PP&E; however, DPAS cannot yet meet requirements that become effective for accounting periods beginning after September 30, 1997; (2) DPAS does not contain the information needed to meet new federal accounting standards for deferred maintenance and environmental clean-up costs; (3) the DPAS design does not meet several current Department of Defense (DOD) accounting requirements for certain minor types of PP&E; for example, DPAS does not have the information to meet the requirements for recording depletion of natural resources, which represent 1.2 percent of DOD's total general PP&E; (4) DPAS needs to be modified with the addition of data elements and financial transactions to meet new standards and requirements as well as the current ones not yet covered; (5) implementation of DPAS at DISA did not ensure financial control and accurate reporting of general PP&E; (6) DPAS was not correctly interfaced with the accounting system due to errors in the interface program used to translate DPAS data into data understandable to DISA's general ledger; (7) this caused transactions to be recorded incorrectly in the general ledger, resulting in a material difference of over $118 million in property values between DPAS detailed records and the general ledger summary records; (8) the problem with the interface program could have been mitigated if transactions using the standard general ledger accounts had been created in DPAS; (9) in addition, compensating controls, such as routine reconciliations between the two systems, were not in place; (10) many of the problems with the accuracy of property data experienced at DISA can be linked to several issues that affect DOD-wide implementation of DPAS; (11) DOD, as part of its DPAS strategic planning process, has not defined the roles, responsibilities, and relationships among the various DOD entities involved, the needed interfaces, or the related controls; (12) nor has this part of the strategic planning produced a detailed DPAS implementation schedule that identifies at what sites and when the system will be implemented; and (13) DOD has left it up to each military service to determine where, when, and how DPAS is to be implemented without providing adequate implementation guidance or ensuring that the implementation schedule includes all sites, making it unlikely that DPAS will be implemented across DOD by the Comptroller's target date of 2000.
The U.S. Customs Service has a diverse mission spanning a large geographic area. Customs' responsibilities include (1) collecting revenue from imports and enforcing Customs and other U.S. laws and regulations, (2) preventing the smuggling of drugs into the United States, and (3) overseeing export compliance and money-laundering issues. At the close of fiscal year 2000, Customs had a permanent workforce of about 20,000 employees. These employees carry out Customs' mission at its headquarters, 20 Customs Management Centers (CMC), 20 Special Agent-in-Charge (SAIC) offices, 301 U.S. ports of entry, 5 Strategic Trade Centers, and over 25 international offices. Customs processed over 23 million import entries, with a value of $1.17 trillion; 140 million conveyances; and 489 million land, sea, and air passengers in fiscal year 2000.

Every manager and supervisor is responsible for conducting a self-inspection of the activities, such as entry processing, that they oversee, using uniform self-inspection worksheets. The worksheets are used to evaluate the key internal control points in a particular program or process, assess mission/program accomplishments, and better define priorities and identify areas needing improvement. Customs' entities are to conduct self-inspections every 6 months; these periods are referred to as self-inspection cycles. At the time of our review, four self-inspection cycles had been completed, with the fifth cycle due to start in July 2001.

Self-inspection results, including problems identified and potential corrective actions, are to be funneled up the chain of command. For example, worksheet results addressing port activities are to be certified by approving officials at the ports of entry. The officials are to send results to the CMC, which in turn sends the results to the cognizant assistant commissioner at Customs headquarters. The assistant commissioners are to report SIP results to MID after ensuring that self-inspection worksheets have been completed, are accurate, and have been analyzed; key issues have been identified; corrective actions have been determined; and timeframes for completing corrective actions have been established.

MID is responsible for monitoring and directing SIP worksheet development and managing the program. In addition, MID is to conduct independent verification and validation inspections of the completed self-inspection worksheets to ensure that they are correct and accurate. MID's inspections are more in-depth for certain areas than the local self-inspection reviews in order to verify that all problems in those areas have been found. MID performs its verification and validation inspections with a staff of 65 professionals, located at headquarters and at seven field offices around the country. MID also supplements its staff by detailing managers and supervisors from other entities to help conduct the inspections. In addition to managing SIP, MID's staff devotes about 30 percent of its time to auditing funds involved in undercover operations.

Figure 1 shows that MID has inspected, or is scheduled to inspect, more than half of Customs' entities by June 30, 2001. In particular, MID has inspected or is scheduled to inspect 19 of the 20 CMCs, 17 of the 20 SAICs, and several of their subordinate entities, including ports of entry, Resident Agent-in-Charge (RAIC) offices, and Resident Agent (RA) offices.
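As a rough illustration of the reporting flow described above, the following is a minimal sketch, with hypothetical field names and a simplified chain of command, of worksheet results collecting certifications as they are funneled up from the local approving official toward MID.

```python
# Minimal sketch of the SIP reporting chain described above: completed
# worksheet results are certified locally and funneled up the chain of
# command. The field names and chain below are hypothetical simplifications.

CHAIN = ["approving official (port)", "CMC", "assistant commissioner"]

def certify_and_forward(worksheet):
    """Collect certifications at each level, then deliver results to MID."""
    for level in CHAIN:
        # Each level is to review the results before certifying them as accurate.
        if not worksheet["answers_supported"]:
            raise ValueError(f"{level}: answers lack supporting documentation")
        worksheet["certified_by"].append(level)
    return worksheet  # MID then verifies and validates the certified results

worksheet = {
    "activity": "entry processing",
    "answers_supported": True,
    "corrective_actions": ["add sign-in log for seized property vault"],
    "certified_by": [],
}
print(certify_and_forward(worksheet)["certified_by"])
```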
To identify (1) SIP’s usefulness as a mechanism for oversight and accountability and (2) problems relating to SIP implementation, we interviewed officials from several Customs’ organizations, including MID, which oversees SIP and is within the Office of Internal Affairs. In addition to MID, during our review, we interviewed officials from the Office of Field Operations (OFO), which oversees CMCs and ports of entry, and the Office of Investigations (OI), which oversees SAICs, RAICs, and RAs, because these are geographically dispersed components with the responsibility for completing a large number of self-inspection worksheets. Also, we judgmentally selected seven ports of entry and one SAIC office from these components for field visits and accompanied MID on inspections of four of the entities. We also visited the CMCs affiliated with the two San Diego ports of entry and Los Angeles International Airport (LAX). At the ports of entry and SAIC office, we interviewed supervisors and managers responsible for implementing SIP at the local level. We also reviewed relevant agency documents, including nationwide results of several self-inspection cycles, self-inspection results at the field entities visited, and the results of MID verification and validation inspections. We reviewed MID verification and validation inspection reports for 144 Customs’ entities. We summarized the results of these reports using spreadsheets, detailing the number of areas MID reviewed, the number of documentation errors MID identified, the number of self-inspections with questions answered incorrectly, and MID’s assessments and observations of SIP implementation. Because we could not glean such specific information from reports for 17 of these entities, these reports were not included in our analysis. MID’s results from inspections of 127 entities and over 2,000 self-inspection worksheets were used in our analysis. Because the MID reports did not follow a consistent format and it was often difficult to extract the appropriate information, we used the reports only to validate overall assessments given by MID. Because the entities inspected were judgmentally selected by MID, the findings from these reports and our analysis of these reports may not be representative of the entire Customs Service. To identify improvements and refinements that could enhance the value of the program, we worked closely with directors, managers, and supervisors at selected Customs sites. We also interviewed MID, OFO, and OI officials to obtain their views and plans for improving and refining the program. In addition, we reviewed MID documentation concerning changes and refinements to the program. We performed our work between June 2000 and April 2001 in accordance with generally accepted government auditing standards. We provided the Customs Service with a draft of this report. Customs’ written comments are discussed in our agency comments section and are included as appendix I to this report. Our visits to eight Customs’ entities and review of 127 MID verification and validation inspection reports found SIP to be a useful mechanism for managers and supervisors to identify and correct problems at the local level and to obtain more control over activities they oversee. During self- inspections, the entities uncovered areas of vulnerability and identified numerous areas needing improvement in their local operations. The MID also identified areas needing improvement through their verification and validation inspections. 
The managers and supervisors we talked with said that SIP had made a positive contribution to oversight and accountability and to improving their operations. SIP has been a useful mechanism for identifying and correcting problems. The Customs' entities that we visited had uncovered numerous areas of vulnerability in their local operations as a result of self-inspections. For example, all eight entities had identified weak controls over seized property and were implementing corrective actions. Corrective actions included developing evidence and sign-in logs for property rooms and vaults and improving processes for transferring seized property to secure vaults in a timely manner. In addition, at one port of entry we visited, Customs officials discovered problems in the imprest fund as a result of a self-inspection. According to the port director, the self-inspection raised questions about a cashier's handling of the fund, which led to a determination that the cashier had embezzled $1,000 from the fund.

According to an OFO report, CMCs and ports of entry identified approximately 3,000 items needing improvement, with corresponding corrective actions, nationwide during the second SIP cycle. The OFO SIP Program Coordinator tracks the status of the corrective actions for issues that occur at many ports. Financial management, trade programs, passenger processing (including personal searches), and fines, penalties, and forfeitures (including seized property) were the areas identified most frequently as needing improvement. In addition, all the entities we visited had identified numerous areas needing improvement and had implemented a number of corrective actions. For example, one port of entry identified 37 areas needing improvement out of the 54 it inspected during the second SIP cycle. Of the 37 areas, corrective actions for 31 had been completed at the time of our visit; the rest were in process. During the third SIP cycle, the port identified 27 different areas needing improvement out of 64 inspected. Of the 27 areas, corrective actions for 23 had been completed by the time of our visit; the rest were in process.

In addition to problems uncovered through local-level self-inspections, MID verification and validation inspections identified additional areas needing improvement. For example, at one port of entry, MID inspectors found late filing of travel vouchers, security checks not being conducted, and personal search facilities with privacy and safety shortcomings. In response, MID made numerous recommendations to port management for corrective actions. MID officials explained that the recommendations were based on problems identified during their inspection that the port should have found during its self-inspection, or that MID found in areas it reviews that are not addressed by the self-inspection worksheets. MID officials further explained that while on-site at a port of entry or other entity, they conduct more in-depth reviews that go beyond the scope of the worksheet questions. These in-depth reviews have been targeted toward higher-risk areas, such as airport security programs, and may lead to worksheet revisions where warranted. MID also conducts follow-up activities to ensure that its recommendations have been completed. MID generally sends a follow-up memorandum to the applicable entity about 6 months after the inspection to determine the status of the corrective actions taken in response to the recommendations.
The entity then provides MID with a letter stating the status of the corrective action(s) and provides supporting documentation. After receiving this letter, MID may close out the recommendation, require more documentation, or visit the entity to obtain more documentation. For example, during the second and third SIP cycles, MID made a total of 65 recommendations to improve imprest fund management. At the time of our review, 17 of the 65 recommendations had been implemented and were closed out. Forty-one recommendations were pending; MID had sent out follow-up memorandums to determine the status of the pending recommendations but had not received responses at the time of our review. MID officials told us that it was too soon to send out follow-up memorandums on the remaining seven recommendations.

Managers and supervisors at every port of entry and SAIC office we visited told us that SIP was useful as a mechanism for oversight and accountability and that it had contributed to improving their operations. They said that they had uncovered and corrected problems they would not have discovered had they not performed the self-inspections. For example, one port director said that SIP highlighted problem areas in the port's operations and brought them to the attention of managers and supervisors more effectively than had Customs' prior management inspection program. Under one component of the prior program, teams of officers from areas such as passenger processing would inspect different areas such as fines, penalties, and forfeitures. Because these teams did not work in the areas being inspected, they generally had limited knowledge and expertise of the activities they were inspecting. With this limited knowledge, they were often unable to verify the information they were given or to investigate problems in the activities further. In contrast, under SIP, supervisors inspect their own activities and their immediate superiors review the results. According to this port director, SIP helps supervisors maintain accountability for their areas of responsibility because it requires them to "back up" the results of their self-inspections with supporting documentation. The next level of local management is to certify that the results are correct and, should MID inspect their area of responsibility, managers and supervisors will be asked to explain how they determined worksheet answers. Many supervisors told us that these different levels of review helped give the process integrity.

Managers and supervisors also said that self-inspections assisted them in uncovering and correcting problems that they would otherwise not have known about, and that SIP helps keep them informed and current on Customs' policies and directives. For example, at one entity, purchase cardholders were not locking up their purchase cards as required, but were carrying them in their wallets. The cardholders reported that they were not aware of the requirement to keep the cards secured, but because of SIP, they now lock up the cards.

Notwithstanding the positive aspects of the program, several problems have surfaced during SIP implementation. During its verification and validation inspections, MID found insufficient supporting documentation for some worksheet answers and inaccurate reporting of some self-inspection results. Largely due to these findings, MID was also concerned that some officials may not have conducted adequate reviews of completed self-inspection worksheets before certifying them as accurate.
We also found, to some extent, insufficient documentation to support worksheet answers and questions answered incorrectly at the entities that we visited. In addition to problems with worksheet completion and accuracy, many of the managers and supervisors we interviewed believed the program to be burdensome and time-consuming.

MID inspections and our analysis of MID inspection reports identified problems concerning insufficient documentation to support worksheet answers and incorrectly answered questions. According to the results of inspections conducted since January 2000, MID reported finding insufficient supporting documentation on 23 percent of the worksheets reviewed and questions answered incorrectly on 16 percent of the worksheets. The results from our review of MID inspection reports were consistent with MID's assessment.

Insufficient documentation and inaccurate worksheet results indicated to MID inspectors that approving officials may not have sufficiently reviewed completed worksheets before certifying their accuracy. MID inspectors explained that by signing worksheet certifications, approving officials, such as port directors, are indicating that they reviewed the self-inspection and agree that the results are accurate. However, MID inspectors believed that if support for answers was not indicated on the worksheets, it was not clear how the officials could have determined that the results were accurate. In effect, MID inspectors believed that some approving officials might be "rubber-stamping" self-inspection results.

According to MID inspection reports, problems at some entities with insufficient supporting documentation and inaccurate worksheet answers resulted, in part, from a lack of detailed guidance and instructions on how to complete and review worksheets and determine appropriate sample sizes. Many managers and supervisors that we interviewed agreed with MID's assessment and believed that more guidance and more specific instructions were necessary to answer some worksheet questions. Specifically, they mentioned having difficulty determining the proper supporting documentation for answers to some questions, the appropriate universe from which to draw a sample, and the appropriate sample size.

We also found worksheets lacking supporting documentation and with incorrect answers, to some extent, at all the entities that we visited. At three of the eight Customs entities that we visited, the majority of the worksheets we analyzed either lacked supporting documentation or had incorrectly answered questions. We found instances of managers and supervisors having difficulty determining (1) the proper support for worksheet answers, (2) the appropriate sampling universe and timeframes, and (3) appropriate sample sizes. For example, at one entity we visited with MID inspectors, a purchase cardholder answered "yes" to a worksheet question asking whether the monthly purchase card statements were reconciled within 10 days after their receipt. Although supporting documentation is required for worksheet answers, the purchase cardholder did not indicate that any records, reports, or other documents had been reviewed to support the answer. During their inspection, MID accessed a report that showed some purchase card statements had not been reconciled within the prescribed timeframe. According to the MID inspector, the report should have been reviewed when answering the question, and a copy should have been included with the self-inspection worksheet as supporting documentation.
The purchase cardholder explained that he was unaware of this report and, lacking documentation, had answered the question based on what he remembered. As a result, MID inspectors concluded that incorrect results had been funneled up the chain of command. Purchase cardholders at two of the other three entities we visited with MID also said that they were unaware of this reconciliation report.

We also found supervisors having difficulty determining the appropriate sampling universe and timeframes. For example, at one entity we visited, a MID inspector found that a supervisor completing the collections and deposits worksheet had sampled supporting documents from only 1 month of the 6-month reporting period. The MID inspector explained that this was not representative of the activity in the review period. The supervisor explained that he had received no training or guidance on how to conduct the self-inspection and had only the instructions on the worksheet for guidance. We reviewed the self-inspection worksheet and found that the instructions read "Review at least 10 documents randomly selected from the day's deposit and collection documents." The instructions do not explain that the sample of documents should come from every month or describe how the documents should be representative of the entire reporting period. At another port of entry, we found that supervisors had sampled collection and deposit activity from only 1 day in the 6-month reporting period, which appeared to be consistent with the limited instructions referenced above.

In addition, we found examples of difficulties in determining appropriate sample sizes. For example, questions on one worksheet ask about controls over narcotic training aids for canines. The worksheet did not contain specific instructions or guidance about how to support answers to these questions. At one entity we visited, the supervisor who completed the worksheet used control documents from one canine enforcement officer's (CEO) file as supporting documentation for the worksheet answers. We interviewed the supervisor and found that there were 16 CEOs at the port of entry in possession of narcotic training aids, although only one file had been reviewed for the self-inspection. The supervisor said that no guidance or instructions had been provided on how many files needed to be reviewed to support the worksheet answers, so the supervisor randomly picked one file to answer the worksheet questions. At another port of entry, the supervisor who completed the canine narcotics training aid worksheet indicated that, without guidance on how to determine the sample size, the local CMC had instructed that a 5-percent sample should be taken. The supervisor drew a 5-percent sample of CEOs in possession of narcotic training aids, which resulted in only 2 out of 53 files being included in the self-inspection.
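The sampling problems described above stem largely from undefined universes and timeframes. The following is a minimal sketch, not drawn from any Customs guidance, of one way to draw a sample that is spread across every month of a 6-month reporting period; the record layout, month labels, and per-month sample size are hypothetical.

```python
# Minimal sketch of drawing a sample that represents an entire 6-month
# reporting period rather than a single day or month. The record layout,
# month labels, and sample size below are hypothetical.
import random

def representative_sample(records, months, per_month):
    """Randomly select per_month records from each month in the period."""
    sample = []
    for month in months:
        pool = [r for r in records if r["month"] == month]
        # Sampling within each month ensures no part of the period is skipped.
        sample.extend(random.sample(pool, min(per_month, len(pool))))
    return sample

months = ["Jan", "Feb", "Mar", "Apr", "May", "Jun"]
# Hypothetical universe: 600 collection/deposit documents, 100 per month.
records = [{"id": i, "month": months[i % 6]} for i in range(600)]

# Ten documents per month yields 60 documents spread across the period,
# rather than 10 documents drawn from a single day's activity.
print(len(representative_sample(records, months, per_month=10)))  # 60
```

Sampling within each month is only one possible convention; the broader point is that worksheet instructions would need to state the universe, the period, and the sample size explicitly for results to be comparable across entities.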
At six of the eight entities we visited, more than half of the 54 managers and supervisors we interviewed said that SIP can be paperwork-intensive, burdensome, and time-consuming. In particular, some self-inspection worksheets have more questions than others, ranging from 1 to 15 questions, and some worksheets require extensive research to determine and support worksheet answers. For example, a supervisory import specialist we interviewed explained that, to answer one worksheet question, a data query of over 120,000 records had to be retrieved before a sample could be taken. According to the supervisor, other questions on this worksheet required similar efforts, taking several days to gather the information. Although other worksheets may not require such extensive research, neither MID nor any of the entities we visited tracked the amount of time required to complete various self-inspections. Consequently, we were unable to determine the actual time burden of the program or the extent to which completing self-inspection worksheets may affect primary job duties.

About three-quarters of the managers and supervisors who told us that SIP was time-consuming and burdensome said that too much time was being spent on low-risk activities, such as internal and external relations, and that reporting requirements for low-risk activities were too frequent. At several of the entities we visited, managers and supervisors said that Customs needed to prioritize the activities covered by the worksheets and focus on high-risk activities, such as collections and deposits and seized narcotics. One port director said that, based on the port's activities, the areas of collections and deposits of cash/checks and seized currency should be the two highest priorities under the port's self-inspection. These are risky areas that could be vulnerable to abuse, making them good candidates for continuous scrutiny under SIP. The port director also believed that other areas are not as important and should receive less scrutiny during the SIP cycles. As discussed later in this report, MID is taking action to address these concerns.

Based on feedback MID received from our review, its own inspections, and comments from managers and supervisors, several improvements and refinements to SIP are being implemented. These include (1) adding key internal control questions to worksheets, (2) reducing the time spent self-inspecting low-risk activities, (3) standardizing MID's verification and validation inspection reporting format, and (4) developing and implementing a computerized self-inspection reporting system.

During our review, we found that some key internal control concepts were not being addressed by self-inspection questions. According to the Comptroller General's Standards for Internal Control in the Federal Government (Nov. 1999), a key factor in helping federal managers improve accountability and minimize operational problems to better achieve agencies' missions and program results is to implement appropriate internal control. Internal control is a series of actions and activities that occur throughout an entity's operations, on an ongoing basis, as part of its infrastructure to help managers run the entity and achieve their aims. An organization's internal control provides reasonable assurance that its operations are effective and efficient, its financial reporting is reliable, and it is in compliance with applicable laws and regulations. Management sets the objectives, puts the control mechanisms and activities in place, and monitors and evaluates the control.

Not all self-inspection worksheet questions, however, fully address the concepts in the internal control standards. On the imprest fund worksheet, for example, we found a question that read: "If there is no separation of duties for a cashier who performs other procurement/change making functions, does management perform any extra reviews?" This question, however, was not specific enough to address the segregation of duties internal control activity.
“Key duties and responsibilities need to be divided or segregated among different people to reduce the risk of error or fraud. This should include separating the responsibilities for authorizing transactions, processing and recording them, reviewing the transactions, and handling any related assets. No one individual should control all key aspects of a transaction or event.” We believed that the question on the imprest fund worksheet should be revised to address the fundamental concept by specifically asking whether key duties and responsibilities were segregated among different people in the imprest fund area. We also noted that the worksheet for the purchase card program did not have a question about segregation of duties. “An agency must establish physical control to secure and safeguard vulnerable assets. Examples include security for and limited access to assets such as cash, securities, inventories, and equipment which might be vulnerable to risk of loss or unauthorized use. Such assets should be periodically counted and compared to control records.” We discussed our internal control concerns with the MID Director at Customs headquarters, who added the question on unannounced cash counts back into the imprest fund worksheet for the fourth SIP cycle, and began a reassessment of all SIP worksheets to ensure that questions cover basic internal controls. Key duties and responsibilities are divided to reduce the risk of error or mismanagement. No one individual should control all key aspects of a transaction or event. Examples of programs requiring key functions to be separated include procurement (approving, purchasing, and receiving), imprest fund (cashier and approver), and collections and deposits (preparer, verifier and receiver); and Individuals who account for assets are not the same as those that have custody of the assets. Preventing this assures that individuals who record assets can not misappropriate the asset and conceal its whereabouts in the inventory records. MID left the implementation of the revision to the discretion of each assistant commissioner and said that for the revised worksheets to be available for the January 2002 SIP cycle; the worksheets should be completed by October 2001. We were concerned about the lengthy implementation timeframe, but after discussing the issue with program manager representatives of the OFO and OI Assistant Commissioners, we were assured that work on the revisions was well underway. In fact, both the OFO and OI program managers told us that their goal is to complete the revised worksheets in May 2001 so that they can be used for the next SIP cycle beginning in July 2001. In these instances of key internal control questions missing from worksheets whose programs could be subject to financial vulnerability and corruption, we worked closely with MID officials to correct the deficiencies. Because the officials were very responsive to our concerns and took corrective action during this engagement, we will not be making a recommendation on this issue in this report. A second area where improvements are being made is by reducing the time spent self-inspecting low-risk activities. To help reduce the SIP paperwork burden and time requirements, Customs’ activities are being prioritized for self-inspection based on risk level. Many worksheets covering activities considered to be low-risk are to be completed on an annual, instead of semiannual basis. 
Worksheets covering activities that are vulnerable to fraud and abuse, however, are being retained for semiannual completion. For example, OFO began prioritizing its activities and associated worksheets after the first SIP cycle because it found that completing all the worksheets during one cycle was too burdensome. According to the OFO Acting SIP Program Coordinator, worksheets covering high-risk areas, such as money, narcotics, or seized property, were retained on a semiannual cycle. OFO placed low-risk activities, such as customer service operations, on an annual cycle. In all, OFO placed the majority of its worksheets—52 out of 78—on an annual, rather than semiannual, cycle.

OI began prioritizing its activities and worksheets by risk level for the fourth SIP cycle, which began in January 2001, for field offices such as SAICs, because those offices have the most worksheets to complete. Like OFO, OI placed activities considered low-risk on an annual, rather than semiannual, cycle; an example is whether a threat analysis has been conducted to determine the potential for fraud investigations. Similarly, the OI SIP Program Coordinator said that worksheets covering high-risk areas, such as money, narcotics, seized property, and other sensitive areas, were retained on a semiannual cycle. In all, OI placed the majority of its worksheets—16 out of 22—on an annual, rather than semiannual, cycle.

A third area where MID is making improvements is in the way it reports inspection results. MID is to prepare a management inspection report after each inspection it performs at a Customs entity. The report is used to document the identification and correction of management and operational deficiencies and contains various sections, such as an executive digest, national issues, and SIP verification summaries. Our review of MID inspection reports showed a wide range of reporting formats, as well as inconsistent information included in the various report sections. For example, while accompanying MID inspectors to various ports of entry, we asked three MID field directors where in their subsequent reports they would place shortcomings they might find with worksheet questions. Each field director named a different section of the report in which the findings would be placed. MID identified the problem and has recently developed a draft standard operating procedure and format for its verification and validation inspection reports. After consultation with its field directors, MID will finalize the report format, which is to standardize the national issues, recommendations, and best practices sections. According to the MID Director, standardizing MID inspection reports should make it much easier to track national trends and subsequent corrective actions, as well as to identify and disseminate best practices among the various Customs' entities.

A final area where progress is being made is in developing and implementing a computerized self-inspection reporting system, SIRS. SIRS should make it easier to record and analyze SIP data and to track nationwide trends and corrective actions. An important part of the system is operational, but the remaining components are under software development or have not yet entered the development process. With the completion of the first component, the results of each SIP worksheet, including corrective actions and certifications, may be entered into SIRS via a personal computer.
The SIRS component that will allow analyses of SIP data to identify regional, national, and systemic issues, however, is still being developed, with an expected rollout date of August 2001. The final SIRS component, which is not yet under development, is intended to allow MID to track whether corrective actions from MID inspection reports have been taken. MID officials told us that they are unsure when this final component will be available but believe it will be early in 2002. According to MID officials, completion of all SIRS components is critical to achieving the full potential of SIP in identifying overall problems and trends and institutionalizing solutions. SIP is a work in progress: a program under continuous change and refinement. Although many managers recognized that SIP enables them to obtain more control over the activities that they oversee, many also view it as somewhat burdensome and time-consuming. These and other implementation concerns have been identified and addressed as SIP enters its third full year of operation and completes its fourth self-inspection cycle. Several projects are underway that should contribute to a more streamlined, effective, and less burdensome program. Until SIRS is fully implemented, however, it will be difficult to determine the full extent of the program’s impact on identifying problems and trends and institutionalizing corrective actions. One primary benefit of self-inspection is that, as activities are reviewed and results certified as accurate, management should have confidence that problems are being identified and corrected at the local level. This is not the case, however, if worksheet guidance is insufficient and instructions are unclear. Because of inadequate guidance, inaccurate and incomplete self-inspection results can and do occur, reducing the credibility of the program. More can be done to ensure that all worksheets contain complete and clear guidance and instructions for making determinations such as sample sizes and timeframes. Clarifying and refining all worksheet guidance and instructions will not only assist managers and supervisors in conducting self-inspections, but will also provide approving officials and MID inspectors with a clear set of criteria for determining whether self-inspections were done correctly and completely. This could go a long way toward ensuring that the self-inspection program meets its goals of building accountability and fostering integrity throughout the agency. To help improve SIP, we recommend that the Acting Commissioner of Customs direct the Management Inspections Division Director to review all worksheets and provide clear guidance for (1) properly documenting support for worksheet answers and (2) determining the universe of activity to include in the self-inspection and, if required, the appropriate sample size. We requested comments on a draft of this report from the Acting Commissioner of Customs or his designee. On May 22, 2001, the Director, Office of Planning, provided us with written comments, which are reprinted in appendix I. The Director said that most of Customs’ concerns relating to the report had been addressed through discussions with our audit team during the engagement. The Director also said that Customs will shortly be implementing an action plan to fully address our recommendation.
We will send copies of this report to Senator Max Baucus, Ranking Member of the Senate Committee on Finance; Representative Charles Rangel, Ranking Minority Member of the House Ways and Means Committee; and Representative Sander Levin, Ranking Minority Member of the House Ways and Means Subcommittee on Trade. In addition, we are providing copies to the Honorable Paul H. O’Neill, the Secretary of the Treasury; Charles W. Winwood, the Acting Commissioner of Customs; and other interested parties. Copies of this report also will be made available to others upon request. This report will also be available on GAO’s homepage at http://www.gao.gov. The major contributors to this report are acknowledged in appendix II. If you or your staff have any questions about the information in this report, please contact me at (202) 512-8777 or Darryl W. Dutton at (213) 830-1000.
The Customs Service's responsibilities include collecting revenue from imports and enforcing U.S. laws and regulations, preventing the smuggling of drugs into the United States, and overseeing export compliance and money laundering issues. Customs recently began a self-inspection program (SIP) to aid in carrying out its diverse responsibilities. This report discusses (1) SIP's use as a mechanism for oversight and accountability, (2) problems related to SIP implementation, and (3) improvements and refinements underway to enhance the value of the program. GAO found that SIP is a useful mechanism for managers to identify and correct problems at the local level and to obtain more control over activities that they oversee. Implementation problems included a lack of detailed instructions on how to complete self-inspection worksheets and inadequate worksheet review by responsible officials. Customs is trying to correct deficiencies in key internal control areas.
The District of Columbia Family Court Act of 2001 (P.L. 107-114) was enacted on January 8, 2002. The act stated that, not later than 90 days after the date of the enactment, the chief judge of the Superior Court shall submit to the president and Congress a transition plan for the Family Court of the Superior Court, and shall include in the plan the following:
- The chief judge’s determination of the role and function of the presiding judge of the Family Court.
- The chief judge’s determination of the number of judges needed to serve on the Family Court.
- The chief judge’s determination of the number of magistrate judges of the Family Court needed for appointment under Section 11-1732, District of Columbia Code.
- The chief judge’s determination of the appropriate functions of such magistrate judges, together with the compensation of and other personnel matters pertaining to such magistrate judges.
- A plan for case flow, case management, and staffing needs (including the needs of both judicial and nonjudicial personnel) for the Family Court, including a description of how the Superior Court will handle the one family/one judge requirement pursuant to Section 11-1104(a) for all cases and proceedings assigned to the Family Court.
- A plan for space, equipment, and other physical needs and requirements during the transition, as determined in consultation with the administrator of General Services.
- An analysis of the number of magistrate judges needed under the expedited appointment procedures established under Section 6(d) in reducing the number of pending actions and proceedings within the jurisdiction of the Family Court.
- A proposal for the disposition or transfer to the Family Court of child abuse and neglect actions pending as of the date of enactment of the act (which were initiated in the Family Division but remain pending before judges serving in other divisions of the Superior Court as of such date) in a manner consistent with applicable federal and District of Columbia law and best practices, including best practices developed by the American Bar Association and the National Council of Juvenile and Family Court Judges.
- An estimate of the number of cases for which the deadline for disposition or transfer to the Family Court cannot be met and the reasons why such deadline cannot be met.
- The chief judge’s determination of the number of individuals serving as judges of the Superior Court who meet the qualifications for judges of the Family Court and are willing and able to serve on the Family Court.
If the chief judge determines that the number of individuals described in the act is less than 15, the plan is to include a request that the Judicial Nomination Commission recruit and the president nominate additional individuals to serve on the Superior Court who meet the qualifications for judges of the Family Court, as may be required to enable the chief judge to make the required number of assignments. The Family Court Act states that the number of judges serving on the Family Court of the Superior Court cannot exceed 15. These judges must meet certain qualifications, such as having training or expertise in family law and certifying to the chief judge of the Superior Court that they intend to serve the full term of service and will participate in the ongoing training programs conducted for judges of the Family Court. The act also allows the court to hire and use magistrate judges to hear family court cases.
Magistrate judges must also meet certain qualifications, such as holding U.S. citizenship, being an active member of the D.C. Bar, and having not fewer than 3 years of training or experience in the practice of family law as a lawyer or judicial officer. The act further states that the chief judge shall appoint individuals to serve as magistrate judges not later than 60 days after the date of enactment of the act. The magistrate judges hired under this expedited appointment process are to assist in implementing the transition plan and, in particular, to assist with the transition or disposition of child abuse and neglect proceedings not currently assigned to judges in the Family Court. The Superior Court submitted its transition plan on April 5, 2002. The plan consists of three volumes. Volume I contains information on how the court will address case management issues, including organizational and human capital requirements. Volume II contains information on the development of IJIS and its planned applications. Volume III addresses the physical space the court needs to house and operate the Family Court. Courts interact with various organizations and operate in the context of many different programmatic requirements. In the District of Columbia, the Family Court frequently interacts with the child welfare agency—the Child and Family Services Agency (CFSA)—a key organization responsible for helping children obtain permanent homes. CFSA must comply with federal laws and other requirements, including the Adoption and Safe Families Act (ASFA), which placed new responsibilities on child welfare agencies nationwide. ASFA introduced new time periods for moving children who have been removed from their homes into permanent home arrangements, as well as penalties for noncompliance. For example, the act requires states to hold a permanency planning hearing not later than 12 months after the child is considered to have entered foster care. Permanent placements include the child’s return home and the child’s adoption. Other organizations that the Family Court interacts with include the Office of Corporation Counsel (OCC) and the Metropolitan Police Department. The Family Court transition plan provides information on most, but not all, of the elements required by the Family Court Act; however, some aspects of case management, training, and performance evaluation are unclear. For example, the plan describes the Family Court’s method for transferring child abuse and neglect cases to the Family Court, its one family/one judge case management principle, and the number and roles of judges and magistrate judges. However, the plan does not (1) include a request for judicial nomination, (2) indicate the number of nonjudicial staff needed for the Family Court, (3) indicate if the 12 judges who volunteered for the Family Court meet all of the qualifications outlined in the act, and (4) state how the number of magistrate judges to hire under the expedited process was determined. In addition, although not specifically required by the act, the plan does not describe the content of its training programs and does not include a full range of measures by which the court can evaluate its progress in ensuring better outcomes for children. The transition plan establishes criteria for transferring cases to the Family Court and states that the Family Court intends to have all child abuse and neglect cases pending before judges serving in other divisions of the Superior Court closed or transferred into the Family Court by June 2003.
According to the plan, the court has asked each Superior Court judge to review his or her caseload to identify those cases that meet the criteria established by the court for the first phase of case transfer back to the Family Court for attention by magistrate judges hired under the expedited process provided in the act. Cases identified for transfer include those in which (1) the child is 18 years of age or older, the case is being monitored primarily for the delivery of services, and no recent allegations of abuse or neglect exist; and (2) the child is committed to the child welfare agency and is placed with a relative in a kinship care program. Cases that the court believes may not be candidates for transfer by June 2002 include those in which the judge believes transferring the case would delay permanency. The court expects that older cases will first be reviewed for possible closure and expects to transfer the entire abuse and neglect caseloads of several judges serving in other divisions of the Superior Court to the Family Court. Using the established criteria to review cases, the court estimates that 1,500 cases could be candidates for immediate transfer. The act also requires the court to estimate the number of cases that cannot be transferred into the Family Court in the timeframes specified. The plan provides no estimate because the court’s proposed transfer process assumes all cases will be closed or transferred, based on the outlined criteria. However, the plan states that the full transfer of all cases is partially contingent on hiring three new judges. The transition plan identifies the way in which the Family Court will implement the one family/one judge approach and improve its case management practices; however, some aspects of case management, training, and performance evaluation are unclear. The plan indicates that the Family Court will implement the one family/one judge approach by assigning all cases involving the same family to one judicial team composed of a Family Court judge and a magistrate judge. This assignment will begin with the initial hearing by the magistrate judge on the team and continue throughout the life of the case. Juvenile and family court experts indicated that this team approach is realistic and a good model of judicial collaboration. One expert said that such an approach provides for continuity if either team member is absent. Another expert added that, given the volume of cases that must be heard, the team approach can ease the burden on judicial resources by permitting the magistrate judge to make recommendations and decisions, thereby allowing the Family Court judge time to schedule and hear trials and other proceedings more quickly. Court experts also praised the proposed staggered terms for judicial officials—newly hired judges, magistrate judges, and judges who are already serving on the Superior Court will be appointed to the Family Court for varying numbers of years—which can provide continuity while recognizing the need to rotate among divisions in the Superior Court. The plan also describes other elements of the Family Court’s case management process, such as how related cases will be assigned and a description of how many judges will hear which types of cases. For example, the plan states that, in determining how to assign cases, preference will generally be given to the judge or magistrate judge who has the most familiarity with the family.
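As a minimal sketch of how the one family/one judge principle could be operationalized, the routine below keeps a registry that maps each family to a single judicial team and routes any new case involving a known family to that same team. The team names and lookup logic are hypothetical; the plan does not specify how the court's systems would implement the assignment.

    # Hypothetical registry: family identifier -> judicial team (judge + magistrate).
    family_to_team = {}
    teams = ["Team A (Judge 1 / Magistrate 1)", "Team B (Judge 2 / Magistrate 2)"]
    next_team = 0

    def assign_case(family_id: str) -> str:
        """Assign a case to the team already handling this family, if any;
        otherwise rotate to the next team (one family/one judge)."""
        global next_team
        if family_id not in family_to_team:
            family_to_team[family_id] = teams[next_team % len(teams)]
            next_team += 1
        return family_to_team[family_id]

    # Two cases involving the same family land with the same judicial team.
    print(assign_case("family-123"))  # new family -> Team A
    print(assign_case("family-123"))  # related case -> Team A again

The essential design choice is that the family, not the case, is the unit of assignment, so a later case automatically follows the earlier one to the same team.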
In addition, the plan states that (1) all Family Court judges will handle post-disposition child abuse and neglect cases; (2) 10 judges will handle abuse and neglect cases from initiation to closure as part of a judicial team; (3) 1 judge will handle abuse and neglect cases from initiation to closure independently (not as part of a team); and (4) certain numbers of judges will handle other types of cases, such as domestic relations cases, mental health trials, and complex family court cases. However, because the transition plan focuses primarily on child abuse and neglect cases, this information does not clearly explain how the total workload associated with the approximately 24,000 cases under the court’s jurisdiction will be handled. One court expert we consulted commented on the transition plan’s almost exclusive focus on child welfare cases and concluded that it is unclear how cases not involving child abuse and neglect will be handled. In addition to describing case assignments, the plan identifies actions the court plans to take to centralize intake. According to the plan, a centralized office will encompass all filing and intake functions that various clerks’ offices in the Family Court—such as juvenile, domestic relations, paternity and support, and mental health—currently carry out. As part of centralized intake, case coordinators will identify any related cases that may exist in the Family Court. To do this, the coordinators will ensure that a new “Intake/Cross Reference Form” is completed by the various parties to a case and will also check the computer databases serving the Family Court. As a second step, the court plans to use alternative dispute resolution to resolve cases more quickly and to expand initial hearings to address many of the issues that the court previously handled later in the life of the case. As a third step, the plan states that the Family Court will provide all affected parties speedy notice of court proceedings and implement strict policies for the handling of cases—such as those for granting continuances—although it does not indicate who is responsible for developing the policies or the status of their development. The plan states that the court will conduct evaluations to assess whether components of the Family Court were implemented as planned and whether modifications are necessary; the court could consider using additional measures that focus on outcomes for children. One court expert said that the court’s development of a mission statement and accompanying goals and objectives frames the basis for developing performance standards. The expert also said that the goals and standards are consistent with those of other family courts that strive to prevent further deterioration of a family’s situation and to focus decision-making on the needs of those individuals served by the court. However, the evaluation measures listed in the plan are oriented more toward the court’s processes, such as whether hearings are held on time, than toward outcomes. According to a court expert, measures must also account for the outcomes the court achieves for children. Measures could include the number of finalized adoptions that do not disrupt, reunifications that do not fail, children who remain safe and are not abused again while under court jurisdiction or in foster care, and the proportion of children who successfully achieve permanency.
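Once such data exist, computing these outcome measures is simple arithmetic. The sketch below illustrates two of them using invented case records; the field names are our assumptions, since the court had not yet defined its data collection.

    # Invented case records; each dict is one closed child welfare case.
    cases = [
        {"outcome": "adoption", "disrupted": False},
        {"outcome": "adoption", "disrupted": True},
        {"outcome": "reunification", "failed": False},
        {"outcome": "reunification", "failed": False},
        {"outcome": "reunification", "failed": True},
    ]

    adoptions = [c for c in cases if c["outcome"] == "adoption"]
    reunifications = [c for c in cases if c["outcome"] == "reunification"]

    # Share of finalized adoptions that did not disrupt.
    adoption_success = sum(not c["disrupted"] for c in adoptions) / len(adoptions)
    # Share of reunifications that did not fail.
    reunification_success = sum(not c["failed"] for c in reunifications) / len(reunifications)

    print(f"Adoptions that did not disrupt: {adoption_success:.0%}")         # 50%
    print(f"Reunifications that did not fail: {reunification_success:.0%}")  # 67%

The arithmetic is trivial; the hard part, as the next passage notes, is gathering reliable underlying data and a baseline against which to judge the rates.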
In addition, the court will need to determine how it will gather the data necessary to measure each team’s progress in ensuring such outcomes or in meeting the requirements of ASFA, and the court has not yet established a baseline from which to judge its performance. In our May 2002 report, we recommended that the Superior Court consider identifying performance measures to track progress toward positive outcomes for the children and families the Family Court serves. The transition plan states that the court has determined that 15 judges are needed to carry out the duties of the court and that 12 judges have volunteered to serve on the court, but it does not address recruitment and the nomination of the three additional judges. Court experts stated that the court’s analysis to identify the appropriate number of judges is based on best practices identified by highly credible national organizations and is, therefore, pragmatic and realistic. However, the plan only provides calculations for how it determined that the court needed 22 judges and magistrate judges to handle child abuse and neglect cases. The transition plan does not include a methodology for how it determined that the court needed a total of 32 judges and magistrate judges for its total caseload of child abuse and neglect cases, as well as other family cases, such as divorce and child support, nor does it explain how anticipated increases in cases will be handled. In addition, the plan does not include a request that the Judicial Nomination Commission recruit and the president nominate the additional three individuals to serve on the Superior Court, as required by the Family Court Act. At a recent hearing on the court’s implementation of the Family Court Act, the chief judge of the Superior Court said that the court plans to submit its request in the fall of 2002. The transition plan also does not provide the Superior Court’s determination of the number of nonjudicial staff needed. The court acknowledges that, while it budgeted for a certain number of nonjudicial personnel based on current operating practices, determining the number of different types of personnel needed to operate the Family Court effectively is pending completion of a staffing study. In our May 2002 report, we recommended that the Superior Court supplement its transition plan by providing information on the number of nonjudicial personnel needed when the staffing study is complete. Furthermore, the plan does not address the qualifications of the 12 judges who volunteered for the court. Although the plan states that these judges have agreed to serve full terms of service, according to the act, the chief judge of the Superior Court may not assign an individual to serve on the Family Court unless the individual also has training or expertise in family law and certifies that he or she will participate in the ongoing training programs conducted for judges of the Family Court. In our May 2002 report, we recommended that the Superior Court supplement its transition plan by providing information on the qualifications of the 12 judges identified in the transition plan to serve on the Family Court. The act also requires judges who had been serving in the Superior Court’s Family Division at the time of its enactment to serve for a term of not fewer than 3 years and provides that the 3-year term be reduced by the length of time already served in the Family Division.
Since the transition plan does not identify which of the 12 volunteers had already been serving in the Family Division prior to the act and the length of time they had already served, the minimum remaining term length for each volunteer cannot be determined from the plan. In commenting on our May 2002 report, the Superior Court said it would provide information on each judge’s length of tenure in its first annual report to the Congress. The transition plan describes the duties of judges assigned to the Family Court, as required by the act. Specifically, the plan describes the roles of the designated presiding judge, the deputy presiding judge, and the magistrate judges. The plan states that the presiding and deputy presiding judges will handle the administrative functions of the Family Court, ensure the implementation of the alternative dispute resolution projects, oversee grant-funded projects, and serve as back-up judges to all Family Court judges. These judges will also have a post-disposition abuse and neglect caseload of more than 80 cases and will continue to consult and coordinate with other organizations (such as the child welfare agency), primarily by serving on 19 committees. One court expert has observed that the list of committees to which the judges are assigned seems overwhelming and said that strong leadership by the judges could result in consolidation of some of the committees’ efforts. The plan also describes the duties of the magistrate judges, but does not provide all the information required by the act. Magistrate judges will be responsible for initial hearings in new child abuse and neglect cases and the resolution of cases assigned to them by the Family Court judge to whose team they are assigned. They will also be assigned initial hearings in juvenile cases, noncomplex abuse and neglect trials, and the subsequent review and permanency hearings, as well as a variety of other matters related to domestic violence, paternity and support, mental competency, and other domestic relations cases. As noted previously, one court expert said that the proposed use of the magistrate judges would ease the burden on judicial resources by permitting these magistrate judges to make recommendations and decisions. However, although specifically required by the act, the transition plan does not state how the court determined the number of magistrate judges to be hired under the expedited process. In addition, while the act outlines the qualifications of magistrate judges, it does not specifically require a discussion of qualifications of the newly hired magistrate judges in the transition plan. As a result, no information was provided, and whether these magistrate judges meet the qualifications outlined in the act is unknown. In our May 2002 report, we recommended that the Superior Court supplement its transition plan by providing information on the analysis it used to identify the number of magistrate judges needed under the expedited appointment procedures. In commenting on that report, the Superior Court said that it considered the following in determining how many magistrate judges should be hired under the expedited process: optimal caseload size, available courtroom and office space, and safety and permanency of children. 
In addition, the court determined, based on its criteria, that 1,500 child abuse and neglect cases could be safely transferred to the Family Court during the initial transfer period and that a caseload of 300 cases each was appropriate for these judicial officers. As a result, the court appointed five magistrate judges on April 8, 2002. A discussion of how the court will provide initial and ongoing training for its judicial and nonjudicial staff is also not required by the act, although the court does include relevant information about training. For example, the plan states that the Family Court will develop and implement a quarterly training program for Family Court judges, magistrate judges, and staff covering a variety of topics and that it will promote and encourage participation in cross-training. In addition, the plan states that new judges and magistrate judges will participate in a 2- to 3-week intensive training program, although it does not provide details on the content of such training for the five magistrate judges hired under the expedited process, even though they were scheduled to begin working at the court on April 8, 2002. One court expert said that a standard curriculum for all court-related staff and judicial officers should be developed and that judges should have manuals available outlining procedures for all categories of cases. In commenting on our May 2002 report, the Superior Court said that the court has long had such manuals for judges serving in each division of the court. In our report on human capital, we said that an explicit link between an organization’s training offerings and curricula and the competencies the organization has identified for mission accomplishment is essential. Organization leaders can show their commitment to strategic human capital management by investing in professional development and mentoring programs that can also assist in meeting specific performance needs. These programs can include opportunities for a combination of formal and on-the-job training, individual development plans, and periodic formal assessments. Likewise, organizations should make fact-based determinations of the impact of their training and development programs to provide feedback for continuous improvement and ensure that these programs improve performance and help achieve organizational results. In commenting on our May 2002 report, the Superior Court said that, although not included in the plan, it has an extensive training curriculum that will be fine-tuned prior to future training sessions. While the court’s transition plan specifies initiatives to coordinate court activities with social services, the Family Court and District social service agencies face challenges in coordinating their respective activities and services in the longer term, such as the time it will take to obtain interagency commitments to provide resources and to coordinate their use. Today, we can offer some preliminary observations of efforts to coordinate family court activities with social services; our ongoing examination of these efforts and related challenges will culminate in a more detailed assessment of factors that facilitate and hinder planned coordination later this year. Collectively, the Family Court Act and court practices recommended by various national associations provide a framework for planning, establishing, and sustaining court activities that are coordinated with related social services.
Specifically, the act requires the mayor, in consultation with the chief judge of the Superior Court, to make staff of District offices that provide social services and other related services to individuals and families served by the Family Court available on-site at the Family Court to coordinate the provision of services. These offices include CFSA, District of Columbia Public Schools, the Housing Authority, OCC, the Metropolitan Police Department, and the Department of Health. The act also requires the heads of each specified office to provide the mayor with such information, assistance, and services as the mayor may require. In addition, the mayor must appoint a liaison between the Family Court and the District government for purposes of coordinating the delivery of services provided by the District government with the activities of the Family Court. National associations, such as the National Center for State Courts, the National Council of Juvenile and Family Court Judges, and the Council for Court Excellence, have also recommended court practices to enhance service coordination and thereby aid in the timely resolution of cases. Key elements that can help establish and maintain coordinated services include the following:
- Case management—decisions by judicial officers, nonjudicial officers, legal representatives, and officials from other agencies that link children and families to needed services. According to the National Center for State Courts, for example, effective case-level service coordination requires the involvement of individuals familiar with both the legal and service areas. Service coordinators can be court or social service agency employees and can be composed of individuals or teams.
- Operational integration—organizational commitments and integrated operations that routinely link court and social service priorities, resources, and decisions. For example, in the interest of integrating court and agency operations, the National Center for State Courts reported that various jurisdictions have established a formal or informal policy committee to discuss issues of relevance to all entities involved in providing services to children and families served by the court. In addition, courts can play a key role in providing centralized access to a network of social services. In some cases, this role includes establishing courthouse resource centers to carry out service referrals or mandates immediately.
The Family Court has begun several initiatives to integrate its activities with the social services provided by other District agencies. At the case management level, the court states in its transition plan that it intends to focus increased attention on family matters to ensure that cases are resolved expeditiously and in the best interests of children and families. The Family Court will use case coordinators, child protection mediators, attorney advisors, and other legal representatives to support the functioning of the judicial team. In addition, the court has asked OCC to assign attorneys to particular judicial teams and anticipates guardians ad litem, parents’ attorneys, and social workers being assigned to particular teams as well. For example, the court said in its April 24, 2002, testimony before the Subcommittee on D.C. Appropriations, Senate Committee on Appropriations, that it has offered CFSA the opportunity to identify clusters of social workers that could be assigned to the teams.
To help achieve operational coordination, the court established interagency committees—the Family Court Implementation Committee and the Child Welfare Leadership Team—that include representatives from CFSA and other agencies. According to court officials, these committees constitute the court’s major vehicle for collaborating with other agencies. In addition, the presiding and deputy presiding judges of the Family Court will meet monthly with the heads of CFSA, the District of Columbia Department of Mental Health, OCC, Public Defender Services, District of Columbia Public Schools, and the Family Division Trial Lawyers Association in an effort to resolve any interagency problems and to coordinate services that affect the child welfare cases filed in Family Court. Other Family Court initiatives to achieve coordinated services include the Family Service Center, which will be composed of the following agencies under the direction of the mayor: District of Columbia Public Schools, District of Columbia Housing Authority, CFSA, OCC, Metropolitan Police Department, and the Department of Health. In achieving coordinated services in the longer term, the court faces several challenges. For example, the court’s transition plan states that, until certain key agencies, such as CFSA and OCC, are sufficiently staffed and reorganized to complement the changes taking place in the Family Court, substantial improvements in the experiences of children and families served by the court will remain a challenge. Moreover, to the extent that improvements in the agencies and the court do not happen simultaneously, or improvements in one do not keep pace with the others, the court has concluded that the collective ability to collaborate will become compromised. The court also said in its April 24, 2002, testimony that it takes time to obtain interagency commitments to coordinate the use of staff resources. Finally, the availability of the Family Service Center as a forum to coordinate services depends on the timely completion of complex and interdependent space and facilities plans discussed in more detail below. Two factors are critical to fully transitioning to the Family Court in a timely and effective manner: obtaining and renovating appropriate space for all new Family Court personnel and developing and installing a new automated information system, currently planned as part of the D.C. Courts IJIS system. The court acknowledges that its implementation plans may be slowed if appropriate space cannot be obtained in a timely manner. For example, the plan addresses how the abuse and neglect cases currently being heard by judges in other divisions of the Superior Court will be transferred to the Family Court, but states that the complete transfer of cases hinges on the court’s ability to hire, train, and provide appropriate space for additional judges and magistrate judges. In addition, the Family Court’s current reliance on nonintegrated automated information systems that do not fully support planned court operations, such as the one family/one judge approach to case management, constrains its transition to a Family Court. The transition plan states that the interim space plan carries a number of project risks, including a very aggressive implementation schedule and a design that makes each part of the plan interdependent with the others. The transition plan further states that the desired results cannot be reached if each plan increment does not take place in a timely fashion.
For example, obtaining and renovating the almost 30,000 occupiable square feet of new court space needed requires a complex series of interrelated steps—from moving current tenants in some buildings to temporary space, to renovating the John Marshall level of the H. Carl Moultrie Courthouse by July 2003. The Family Court of the Superior Court is currently housed in the H. Carl Moultrie Courthouse, and interim plans call for expanding and renovating additional space in this courthouse to accommodate the additional judges, magistrate judges, and staff who will help implement the D.C. Family Court Act. The court estimates that accommodating these judges, magistrate judges, and staff requires an additional 29,700 occupiable square feet, plus an undetermined amount for security and other amenities. Obtaining this space will require nonrelated D.C. Courts entities to vacate space to allow renovations, as well as require tenants in other buildings to move to house the staff who have been displaced. The plan calls for renovations under tight deadlines, and all required space may not be available, as currently planned, to support the additional judges the Family Court needs to perform its work in accordance with the act, making it uncertain as to when the court can fully complete its transition. For example, D.C. Courts recommends that a portion of the John Marshall level of the H. Carl Moultrie Courthouse, currently occupied by civil court functions, be vacated and redesigned for the new courtrooms and court-related support facilities. Although some space is available on the fourth floor of the courthouse for the four magistrate judges to be hired by December 2002, renovations to the John Marshall level are tentatively scheduled for completion in July 2003—2 months after the court anticipates having three additional Family Court judges on board. The Family Service Center will also be housed on this level. Another D.C. Courts building—Building B—would be partially vacated by non-court tenants and altered for use by displaced civil courts functions and other units temporarily displaced in future renovations. Renovations to Building B are scheduled to be complete by August 2002. Space for 30 additional Family Court-related staff, approximately 3,300 occupiable square feet, would be created in the H. Carl Moultrie Courthouse in an as yet undetermined location. Moreover, the Family Court’s plan for acquiring additional space does not include alternatives that the court will pursue if its current plans for renovating space encounter delays or problems that could prevent it from using targeted space. The Family Court Act calls for an integrated information technology system to support the goals it outlines, but a number of factors significantly increase the risks associated with this effort, as we reported in February 2002. For example:
- The D.C. Courts had not yet implemented the disciplined processes necessary to reduce the risks associated with acquiring and managing IJIS to acceptable levels. A disciplined software development and acquisition effort maximizes the likelihood of achieving the intended results (performance) on schedule using available resources (costs).
- The requirements contained in a draft Request for Proposal (RFP) lacked the necessary specificity to ensure that any defects in these requirements had been reduced to acceptable levels and that the system would meet its users’ needs. Studies have shown that problems associated with requirements definition are key factors in software projects that do not meet their cost, schedule, and performance goals.
- The requirements contained in the draft RFP did not directly relate to industry standards. As a result, inadequate information was available for prospective vendors and others to readily map systems built upon these standards to the needs of the D.C. Courts.
Prior to issuing our February 2002 report, we discussed our findings with D.C. Courts officials, who generally concurred with our findings. The officials said that the D.C. Courts would not go forward with the project until the necessary actions had been taken to reduce the risks associated with developing the new information system. In our report, we made several recommendations designed to reduce the risks. In April 2002, we met with D.C. Courts officials to discuss the actions taken on our recommendations and found that significant actions have been initiated that, if properly implemented, will help reduce the risks associated with this effort. For example, D.C. Courts is:
- beginning the work to provide the needed specificity for its system requirements. This includes soliciting requirements from the users and ensuring that the requirements are properly sourced (e.g., traced back to their origin). According to D.C. Courts officials, this work has identified significant deficiencies in the original requirements that we discussed in our February 2002 report. These deficiencies relate to new tasks D.C. Courts must undertake. For example, the Family Court Act requires D.C. Courts to interface IJIS with several other District government computer systems. These tasks were not within the scope of the original requirements that we reported on in our February 2002 report.
- issuing a Request for Information to obtain additional information on commercial products that should be considered by the D.C. Courts during its acquisitions. This helps the requirements management process by identifying requirements that are not supported by commercial products so that the D.C. Courts can reevaluate whether it needs to (1) keep the requirement or revise it to be in greater conformance with industry practices or (2) undertake a development effort to achieve the needed capability.
- developing a systems engineering life-cycle process for managing the D.C. Courts information technology efforts. This will help define the processes and events that should be performed from the time that a system is conceived until the system is no longer needed. Examples of processes used include requirements development, testing, and implementation.
- developing policies and procedures that will help ensure that the D.C. Courts’ information technology investments are consistent with the requirements of the Clinger-Cohen Act of 1996 (P.L. 104-106).
- developing the processes that will enable the D.C. Courts to achieve a level 2 rating—this means basic project management processes are established to track performance, cost, and schedule—on the Software Engineering Institute’s Capability Maturity Model.
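The "properly sourced" requirements work described in the first item above amounts to keeping a traceability record for every requirement. The sketch below shows one minimal way to represent that; the fields and the example requirements are hypothetical, not drawn from the actual IJIS RFP.

    from dataclasses import dataclass

    @dataclass
    class Requirement:
        """A single system requirement with its traceability data (hypothetical)."""
        req_id: str        # unique identifier used in the RFP
        text: str          # the requirement statement itself
        source: str        # origin: statute, user group, or predecessor document
        validated: bool    # whether users have confirmed the requirement

    reqs = [
        Requirement(
            req_id="IJIS-INT-014",
            text="Exchange case data with designated District government systems.",
            source="D.C. Family Court Act of 2001 (interface mandate)",
            validated=True,
        ),
        Requirement(
            req_id="IJIS-RPT-002",
            text="Produce caseload statistics by judge and systemwide.",
            source="(unsourced draft requirement)",
            validated=False,
        ),
    ]

    # Flag requirements that cannot be traced back to an origin or are unvalidated;
    # these are the kinds of defects a requirements management process should catch.
    for r in reqs:
        if "unsourced" in r.source or not r.validated:
            print(f"Review needed: {r.req_id} - {r.text}")

Keeping source and validation status attached to each requirement is what lets an acquisition team detect defects before, rather than after, a contract is let.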
In addition, D.C. Courts officials told us that they are developing a program modification plan that will allow the use of existing (legacy) systems while the IJIS project proceeds. While they recognize that maintaining two systems concurrently is expensive and creates additional resource needs, such as additional staff and training, these officials believe the legacy systems are needed to mitigate the risk associated with any delays in system implementation. Although these are positive steps forward, D.C. Courts still faces many challenges in its efforts to develop an IJIS system that will meet its needs and fulfill the goals established by the act. Examples of these include the following.
Ensuring that the Systems Interfacing with IJIS Do Not Become the Weak Link. The Family Court Act calls for effectively interfacing information technology systems operated by the District government with IJIS. According to D.C. Courts officials, at least 14 District systems will need to interface with IJIS. However, several of our reviews have noted problems in the District’s ability to develop, acquire, and implement new systems. The District’s difficulties in effectively managing its information technology investments could lead to adverse impacts on the IJIS system. For example, the interface systems may not be able to provide the quality of data necessary to fully utilize IJIS’s capabilities or provide the necessary data to support IJIS’s needs. The D.C. Courts will need to ensure that adequate controls and processes have been implemented to mitigate the potential impacts associated with these risks.
Effectively Implementing the Disciplined Processes Necessary to Reduce the Risks Associated with IJIS. The key to having a disciplined effort is to have disciplined processes in multiple areas. This is a complex task and will require the D.C. Courts to maintain its management commitment to implementing the necessary processes. In our February 2002 report, we highlighted several processes, such as requirements management, risk management, and testing, that appeared critical to the IJIS effort.
Ensuring that the Requirements Used to Acquire IJIS Contain the Necessary Specificity to Reduce Requirement-Related Defects to Acceptable Levels. Although D.C. Courts officials have said that they are adopting a requirements management process that will address the concerns expressed in our February 2002 report, maintaining such a process will require management commitment and discipline.
Ensuring that Users Receive Adequate Training. As with any new system, adequately training the users is critical to its success. As we reported in April 2001, one problem that hindered the implementation of the District’s financial management system was its difficulty in adequately training the users of the system. In commenting on our May 2002 report, the Superior Court said that $800,000 has been budgeted for staff training during the 3 years of implementation.
According to D.C. Courts officials, the Family Court Act establishes ambitious timeframes to convert to a family court. Although schedules are important, it is critical that the D.C. Courts follow an event-driven acquisition and development program rather than adopting a schedule-driven approach. Organizations that are schedule-driven tend to reduce or inadequately complete activities such as business process reengineering and requirements analysis. These tasks are frequently not considered “important” since many people view “getting the application in the hands of the user” as one of the more productive activities. However, the results of this approach are very predictable.
Projects that do not perform planning and requirements functions well typically have to redo that work later, and the costs associated with delaying the critical planning and requirements activities are anywhere from 10 to 100 times the cost of doing the work correctly in the first place. With respect to requirements, court experts report that effective technological support is critical to effective family court case management. One expert said that, at a minimum, the system should include the (1) identification of parties and their relationships; (2) tracking of case processing events through on-line inquiry; (3) generation of orders, forms, summons, and notices; and (4) production of statistical reports. The State Justice Institute’s report on how courts are coordinating family cases states that automated information systems, programmed to inform a court system of a family’s prior cases, are a vital ingredient of case coordination efforts. The National Council of Juvenile and Family Court Judges echoes these findings by stating that effective management systems (1) have standard procedures for collecting data; (2) collect data about individual cases, each judge’s aggregate caseload, and the systemwide caseload; (3) assign an individual the responsibility of monitoring case processing; and (4) are user friendly. While anticipating technological enhancements through IJIS, Superior Court officials said that the current information systems do not have the functionality required to implement the Family Court’s one family/one judge case management principle. In providing technical clarifications on a draft of this report, the Superior Court reiterated a statement that the presiding judge of the Family Court made at the April 24, 2002, hearing. The presiding judge said that the Family Court is currently implementing the one family/one judge principle, but that existing court technology is cumbersome to use to identify family and other household members. Nonetheless, staff are using different databases, forms, intake interviews, questions from the bench, and other nontechnological means to identify related cases within the Family Court. Overall, even though some important issues are not discussed, the Superior Court’s transition plan represents a good effort at outlining the steps it will take to implement a Family Court. While the court has taken important steps to achieve efficient and effective operations, it still must address several statutory requirements included in the Family Court Act to achieve full compliance with the act. In addition, opportunities exist for the court to adopt other beneficial practices to help ensure that it improves the timeliness of decisions in accordance with ASFA, that judges and magistrate judges are fully trained, and that case information is readily available to aid judges and magistrate judges in their decision making. Acknowledging the complex series of events that must occur in a timely way to achieve optimal implementation of the family court, the court recognizes that its plan for obtaining and renovating needed physical space warrants close attention to reduce the risk of project delays. In addition, the court has initiated important steps that begin to address many of the shortcomings we identified in our February 2002 report on its proposed information system. The effect of these actions will not be known for some time.
The court’s actions reflect its recognition that developing an automated information system for the Family Court will play a pivotal role in the court’s ability to implement its improved case management framework. In commenting on our May 2002 report, the court generally agreed with our findings and concurred with our recommendations. Our final report on the mayor’s plan to coordinate social services, integrate automated information systems, and develop a spending plan to support these initiatives may discuss some additional actions the mayor and court might take to further enhance their ability to achieve intended service coordination and systems integration. By following through on the steps it has begun to take and by evaluating its performance over time, the court may improve its implementation of the Family Court Act and provide a sound basis for assessing the extent to which it achieves desired outcomes for children. Madam Chairman, this concludes my prepared statement. I will be happy to respond to any questions that you or other members of the subcommittee may have. For further contacts regarding this testimony, please call Cornelia M. Ashby at (202) 512-8403. Individuals making key contributions to this testimony included Diana Pietrowiak, Mark Ward, Nila Garces-Osorio, Steven J. Berke, Patrick DiBattista, William Doherty, John C. Martin, Susan Ragland, and Norma Samuel.
The District of Columbia Superior Court has made progress in planning the transition of its Family Division to a Family Court, but some challenges remain. The Superior Court's transition plan addresses most, but not all, of the required elements outlined in the District of Columbia Family Court Act of 2001. Significantly, the completion of the transition hinges on timely completion of a complex series of interdependent plans intended to obtain and renovate physical space to house the court and its functions. All required space may not be available, as currently planned, to support the additional judges the Family Court needs to perform its work in accordance with the act, making it uncertain as to when the court can fully complete its transition. Although not required as part of its transition plan efforts, the Superior Court has begun to coordinate its activities with social services agencies in the District. However, the court and agencies face challenges in achieving coordinated services in the longer term. Finally, the development and application of the District of Columbia Courts' Integrated Justice Information System will be critical for the Family Court to be able to operate effectively, evaluate its performance, and meet its judicial goals in the context of the changes mandated by the Family Court Act.
Given the size and significance of the government’s investment in IT, it is important that projects be managed effectively to ensure that public resources are wisely invested. Effectively managing projects entails, among other things, developing reliable and high-quality cost estimates that project realistic life-cycle costs. A life-cycle cost estimate provides an exhaustive and structured accounting of all resources and associated cost elements required to develop, produce, deploy, and sustain a particular program. In essence, life cycle can be thought of as a “cradle to grave” approach to managing a program throughout its useful life. Because a life-cycle cost estimate encompasses all past (or sunk), present, and future costs for every aspect of the program, regardless of funding source, it provides a wealth of information about how much programs are expected to cost over time. We have previously reported that a reliable cost estimate is critical to the success of any government acquisition program, as it provides the basis for informed investment decision making, realistic budget formulation and program resourcing, meaningful progress measurement, proactive course correction, and accountability for results. Having a realistic, up-to-date estimate of projected costs—one that is continually revised as the program matures—can be used to support key program decisions and milestone reviews. In addition, the estimate is often used to determine the program’s budget spending plan, which outlines how and at what rate the program funding will be spent over time. Because a reasonable and supportable budget is essential to a program’s efficient and timely execution, a reliable estimate is the foundation of a good budget. However, we have also found that developing reliable cost estimates has been difficult for agencies across the federal government. Too often, programs cost more than expected and deliver results that do not satisfy all requirements. In 2006, the Office of Management and Budget (OMB) updated its Capital Programming Guide, which requires agencies to develop a disciplined cost-estimating capability to provide greater information management support, more accurate and timely cost estimates, and improved risk assessments to help increase the credibility of program cost estimates. (See OMB, Circular No. A-11, Preparation, Submission, and Execution of the Budget (Washington, D.C.: Executive Office of the President, June 2006) and Capital Programming Guide: Supplement to Circular A-11, Part 7, Planning, Budgeting, and Acquisition of Capital Assets (Washington, D.C.: Executive Office of the President, June 2006). OMB first issued the Capital Programming Guide as a supplement to the 1997 version of Circular A-11, Part 3; we refer to the 2006 version, which OMB later updated in August 2011.) Further, according to OMB, programs must maintain current and well-documented estimates of costs, and these estimates must encompass the full life cycle of the program. Among other things, OMB states that generating reliable cost estimates is a critical function necessary to support OMB’s capital programming process. Without this ability, programs are at risk of experiencing cost overruns, missed deadlines, and performance shortfalls.
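To make the "cradle to grave" idea described above concrete, the sketch below rolls up a program's life-cycle cost from its phase estimates, keeping sunk and future costs distinct. The phases and dollar figures are invented for illustration; they do not come from any program discussed in this report.

    # Invented phase estimates (in millions of dollars) for a notional program.
    phase_costs = {
        "development": 40.0,   # sunk: already spent
        "production": 120.0,
        "deployment": 25.0,
        "operations_and_sustainment": 310.0,  # often the largest share over time
    }
    sunk_phases = {"development"}

    life_cycle_cost = sum(phase_costs.values())
    sunk = sum(cost for phase, cost in phase_costs.items() if phase in sunk_phases)
    remaining = life_cycle_cost - sunk

    # A life-cycle estimate counts every phase, regardless of funding source,
    # so decision makers see total cost of ownership, not just next year's budget.
    print(f"Life-cycle cost estimate: ${life_cycle_cost:.1f}M")   # $495.0M
    print(f"Sunk to date: ${sunk:.1f}M; remaining: ${remaining:.1f}M")

The point of the roll-up is that omitting any phase, as several programs discussed below did with operations and maintenance, understates the true commitment.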
Building on OMB’s requirements, in March 2009, we issued a guide on best practices for estimating and managing program costs that highlights the policies and practices adopted by leading organizations to implement an effective cost-estimating capability. Specifically, these best practices identify the need for organizational policies that define a clear requirement for cost estimating; require compliance with cost-estimating best practices; require management review and acceptance of program cost estimates; provide for specialized training; establish a central, independent cost-estimating team; require a standard structure for defining work products; and establish a process to collect and store cost-related data. In addition, the cost-estimating guide identifies four characteristics of a reliable cost estimate that management can use for making informed program and budget decisions: a reliable cost estimate is comprehensive, well-documented, accurate, and credible. Specifically, an estimate is:
- comprehensive when it accounts for all possible costs associated with a program, is structured in sufficient detail to ensure that costs are neither omitted nor double counted, and documents all cost-influencing assumptions;
- well-documented when supporting documentation explains the process, sources, and methods used to create the estimate, contains the underlying data used to develop the estimate, and is adequately reviewed and approved by management;
- accurate when it is not overly conservative or optimistic, is based on an assessment of the costs most likely to be incurred, and is regularly updated so that it always reflects the current status of the program; and
- credible when any limitations of the analysis because of uncertainty or sensitivity surrounding data or assumptions are discussed, the estimate’s results are cross-checked, and an independent cost estimate is conducted by a group outside the acquiring organization to determine whether other estimating methods produce similar results.
We have previously reported on weaknesses associated with the implementation of sound cost-estimating practices at various agencies and the impact on budget and program decisions. For example, in January 2012, we reported that the Internal Revenue Service did not have comprehensive guidance for cost estimating. Specifically, the agency’s guidance did not clearly discuss the appropriate uses of different types of cost estimates. Further, our review of the agency’s Information Reporting and Document Matching program’s cost estimate found it was unreliable. Among other things, the program’s projected budget of $115 million through fiscal year 2016 was only partly supported by the cost estimate, which included costs only through fiscal year 2014. As a result, the agency did not have a reliable basis for the program’s budget projection. We made multiple recommendations to improve the quality of the agency’s cost and budget information, including ensuring that the Information Reporting and Document Matching program’s cost estimate is reliable and that the agency’s cost-estimating guidance is consistent and clearly requires the use of current and reliable cost estimates to inform budget requests. The agency partially agreed with these recommendations and stated that it has taken steps to ensure that its cost-estimating practices and procedures follow consistent, documented guidance. In January 2010, we reported that the Department of Energy lacked a comprehensive policy for cost estimating, making it difficult for the agency to oversee development of high-quality cost estimates. Specifically, the agency’s policy did not describe how estimates should be developed and did not establish a central office for cost estimating.
Further, we reviewed four programs at the department, each estimated to cost approximately $900 million or more, and reported that they did not have reliable cost estimates. For example, three of the cost estimates did not include costs for the full life cycles of the programs, omitting operations and maintenance costs or portions of program scope. Additionally, three of the cost estimates did not use adequate data, one of which relied instead on professional opinion. Further, the cost estimates did not fully incorporate risk—specifically, they did not address correlated risks among project activities. As a result, these programs were more likely to exceed their estimates and require additional funding to be completed. We made multiple recommendations to improve cost estimating at the department, including updating its cost-estimating policy and guidance and ensuring cost estimates are developed in accordance with best practices. The Department of Energy generally agreed with our recommendations and stated that it had several initiatives underway to improve cost-estimating practices, including the development of a new cost-estimating policy and guidance, a historical cost database to support future estimates, and additional training courses. Finally, we reported in December 2009 that the Department of Veterans Affairs had 18 construction projects that had experienced cost increases due, in part, to unreliable cost estimates. For example, many estimates were completed quickly, one of which was a rough-order-of-magnitude estimate that was not intended to be relied on as a budget-quality estimate of full project costs. Additionally, we found that some projects had not conducted a risk analysis to quantify the impact of risk on the total estimated costs. As a result, in some cases, projects had to change scope to meet their initial estimate and, in others, additional funds had to be requested from Congress to allow the agency to complete the project. We recommended that the department improve cost estimating at major construction projects by conducting cost risk analyses and mitigating risks that may influence projects’ costs. The Department of Veterans Affairs agreed with our recommendation and stated that it was taking steps, such as developing a multiyear construction plan to ensure that reliable projections of program costs are available for budgeting purposes, and planning to improve its risk analyses. According to OMB, agencies should develop a disciplined cost-estimating capability to provide greater information management support, more accurate and timely cost estimates, and improved risk assessments to help increase the credibility of program cost estimates. In addition, we have reported that leading organizations establish cost-estimating policies and procedures that define a clear requirement for cost estimating; identify and require compliance with cost-estimating best practices, and validate their use; require that estimates be reviewed and approved by management; require and enforce training in cost estimating; establish a central, independent cost-estimating team; require, at a high level, a standard, product-oriented work breakdown structure; and establish a process for collecting and storing cost-related data to support future estimates. Table 1 describes the key components of an effective cost-estimating policy.
While the eight agencies varied in the extent to which their cost-estimating policies and procedures addressed best practices, most did not address several key components of an effective policy. Specifically, only the Department of Defense’s (DOD) policy was fully consistent with all seven components. While the Department of Homeland Security addressed most components of an effective cost-estimating policy, other agencies’ policies had significant weaknesses, particularly in cost-estimating training and in establishing a process to collect and store cost-related data. Table 2 provides a detailed assessment of each agency’s policies against the components of an effective cost-estimating policy. In addition, a discussion of each policy component follows the table. Clear requirement for cost estimating: Six of the eight agencies fully addressed this policy component by establishing a clear requirement for all programs to perform life-cycle cost estimates, and in certain cases specified more stringent requirements for programs designated as major investments. Among these, four agencies—the Department of Agriculture, the Environmental Protection Agency (EPA), the Department of Labor, and the Department of Justice—established this requirement as part of their policies for programs to perform a cost-benefit analysis. For example, Labor required a life-cycle cost estimate as part of a cost-benefit analysis for both major and nonmajor investments, with less detail required for nonmajor investments. The other two agencies—DOD and Homeland Security—defined a separate requirement for programs to develop life-cycle cost estimates. For the two agencies that did not fully establish a clear requirement for cost estimating, the Department of Veterans Affairs partially addressed this component because its policy only requires cost estimates to be prepared for project increments, rather than the full program life cycle. In addition, the Department of Commerce partially addressed this component because its policies only require cost estimates to be prepared for contracts, rather than for the full program life cycle (including government and contractor costs). Officials at both agencies stated that the responsibility for establishing requirements for cost estimating had been delegated to their component agencies. Further, officials at these two agencies described steps planned to address this and other weaknesses. For example, Veterans Affairs officials stated that the agency’s recently established Office of Corporate Analysis and Evaluation (part of the Office of Planning and Policy) is planning to establish a centralized cost-estimating policy that includes clear criteria for cost estimating, which it expects to complete in fiscal year 2012. Further, Commerce officials stated that the agency is currently in the process of updating its policy and guidance to address this and other weaknesses, which it plans to complete by October 2012. If the updated policies and guidance address the weaknesses we identified, decision makers should have an improved view of their programs’ life-cycle costs. Compliance with cost-estimating best practices: Three of the eight agencies (DOD, Homeland Security, and Labor) fully addressed this policy component by identifying and requiring the use of cost-estimating best practices by their programs, and defining a process to validate their use.
For example, Homeland Security draws on the GAO cost guide to identify cost-estimating best practices, and also provides agency-specific cost-estimating requirements for implementing the practices, such as identifying the cost-estimate documentation required. The agency’s policy also requires that estimates for key programs be validated. The three agencies that partially addressed this policy component—Agriculture, EPA, and Justice—all provided guidance to their programs specific to conducting a cost-benefit analysis; however, this guidance did not fully address important cost-estimating practices, such as conducting a risk and uncertainty analysis, updating the estimate, or comparing the estimate to an independent estimate. Their guidance also did not identify a mechanism for validating estimates. Lastly, two agencies—Commerce and Veterans Affairs—had not addressed this policy component, which corresponds to our finding that these agencies did not have requirements for programs to prepare cost estimates. Among the five agencies that did not fully address this policy component, officials commonly stated that the responsibility for requiring compliance with best practices had been delegated to their component agencies or that addressing cost-estimating shortcomings had not been a priority. Without fully complying with best practices for developing cost estimates, programs are less likely to prepare reliable cost estimates, hindering agency decision making. Management review and approval: Three of the eight agencies (DOD, Homeland Security, and Labor) fully addressed this policy component by requiring that program cost estimates be reviewed and approved by management, including defining the information to be presented and requiring that approval be documented. For example, Labor’s policy requires that senior management at both the component agency responsible for the program and the Office of the Chief Information Officer approve the estimate, based on a briefing that includes information about the estimate such as the largest cost drivers, major risks, and the findings of the integrated baseline review, and that this approval is documented. The three agencies that partially addressed this policy component (Agriculture, EPA, and Veterans Affairs) all required that estimated costs be presented to management, but none fully defined the information to be presented, such as the confidence level associated with the estimate. Lastly, neither Justice nor Commerce had departmental requirements for management review and approval of the cost estimate. Officials at both agencies stated that this responsibility had been delegated to their component agencies. However, without requiring management review and approval of program cost estimates at the department level, agencies have a reduced ability to enforce cost-estimating policies and ensure that cost estimates meet management’s needs for reliable information about programs’ estimated costs. Training requirements: Only one agency—DOD—fully addressed this policy component by requiring cost-estimating training and enforcing this requirement. For example, DOD requires training in cost estimating via its Defense Acquisition Workforce Improvement Act certifications, among other things, for at least one staff member for each major program, as well as for personnel with investment oversight responsibility.
While the two agencies that partially addressed this policy component (Homeland Security and Labor) provided cost-estimating training and had a mechanism to track participation, their policies did not address providing training to personnel with investment oversight responsibility, such as officials from Homeland Security who are responsible for reviewing and approving programs at key milestones in their life cycles. Among the five agencies whose policies did not address requiring and enforcing training in cost estimating (Agriculture, Commerce, EPA, Justice, and Veterans Affairs), four of these agencies referred to OMB’s Federal Acquisition Certification for Program and Project Managers as providing for such training. However, this certification program does not require classes on cost estimating, and furthermore, is neither intended for nor provided to individuals with investment oversight responsibility. Additionally, officials at two of the five agencies—Commerce and Veterans Affairs—stated that training in cost estimating had not been viewed as a priority. Without requiring and enforcing training in cost estimating, agencies cannot effectively ensure that staff have the skills and knowledge necessary to prepare and use cost estimates to make reliable budget and program decisions. Central, independent cost-estimating team: Three of the eight agencies (DOD, Homeland Security, and Veterans Affairs) fully addressed this policy component by establishing central, independent cost-estimating teams, all of which have responsibility for, among other things, developing cost-estimating guidance and validating that program cost estimates are developed in accordance with best practices. In addition, among these three agencies, the teams established at DOD and Veterans Affairs are also charged with improving cost-estimating training. The remaining five agencies had not established a central, independent cost-estimating team. Among these, officials commonly cited the lack of a priority at the department or agency level for cost-estimating initiatives, although in one case a component agency at Agriculture—the Food Safety and Inspection Service—established its own centralized cost-estimating team. While this will likely enhance cost estimating at the component agency, not centralizing the cost-estimating function in the department could result in ad hoc processes and a lack of commonality in the estimating tools and training across the department. Additionally, officials from Labor stated they believe the department’s IT budget is too small to cost-effectively centralize the cost-estimating function; however, doing so would likely, among other things, facilitate a better sharing of resources and could be accomplished in a manner commensurate with agency size. Agencies that do not establish a central and independent cost-estimating team may lack the ability to improve the implementation of cost-estimating policies, support cost-estimating training, and validate the reliability of program cost estimates at the department or agency level. Standard structure for defining work products: DOD was the only agency to fully address this policy component by developing and requiring the use of standard, product-oriented work breakdown structures. Specifically, the agency provided multiple standard work breakdown structures, along with detailed guidance, for different types of programs (e.g., automated information systems, space systems, aircraft systems), and required their use.
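To show the kind of structure involved, here is a minimal sketch of a product-oriented work breakdown structure with cost roll-up; the element names, numbering, and amounts are illustrative and are not drawn from DOD’s standard structures:

```python
# Minimal sketch of a product-oriented work breakdown structure (WBS).
# Element names and costs are illustrative only. Each leaf carries a
# cost; parents roll up their children, so every dollar is counted
# exactly once, which is the property a standard WBS is meant to ensure.

from dataclasses import dataclass, field

@dataclass
class WbsElement:
    number: str
    name: str
    cost: float = 0.0  # leaf cost in millions of dollars
    children: list["WbsElement"] = field(default_factory=list)

    def total(self) -> float:
        """Roll up this element's cost plus all of its descendants."""
        return self.cost + sum(child.total() for child in self.children)

system = WbsElement("1.0", "Automated Information System", children=[
    WbsElement("1.1", "Custom Application Software", cost=45.0),
    WbsElement("1.2", "Enterprise Information Repository", cost=12.5),
    WbsElement("1.3", "System Engineering / Program Management", cost=8.0),
    WbsElement("1.4", "Deployment and Training", children=[
        WbsElement("1.4.1", "Site Activation", cost=6.0),
        WbsElement("1.4.2", "User Training", cost=2.5),
    ]),
])

print(f"{system.name} total: ${system.total():.1f}M")  # $74.0M
```

Because every cost is attached to exactly one product element and parent elements simply roll up their children, costs can be neither omitted nor double counted, and programs that share the structure can be compared with one another and mined for historical data.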
Three agencies—Homeland Security, Justice, and Veterans Affairs—partially addressed this policy component in that they provided one or more product-oriented work breakdown structures in their policies, but did not require programs to use them for cost estimating. Among these, Justice officials stated that a standard work breakdown structure was only required for their earned value management processes. Further, both Veterans Affairs and Homeland Security stated that they intend to require the use of a standard work breakdown structure in the future, but had not yet determined a time frame for establishing this requirement. Lastly, four of the selected agencies—Agriculture, Commerce, EPA, and Labor—had not established a standard structure. Among these, officials from Agriculture, EPA, and Labor stated that they believe it is difficult to standardize how programs define work products, in part, because their programs conduct different types of work and have different needs. While this presents a challenge, agencies could adopt an approach similar to DOD’s and develop various standard work structures based on the kinds of work being performed. Commerce officials stated that they plan to establish a standard structure for defining work products in the future, but have not yet determined a time frame for completing this. Without establishing a standard structure for defining work products, agencies will not be positioned to ensure that they can effectively compare programs and collect and share data among programs. Process to collect and store cost-related data: Only one agency—DOD—fully addressed this policy component by establishing a process to collect and store cost-related data. Specifically, the agency has a central repository for collecting actual costs, software data, and related business data, which serves as a resource to support cost estimating across the agency. Among the seven agencies that have not established a process for collecting and storing cost-related data, Homeland Security’s policy assigns responsibility for doing so to the central cost-estimating team; however, the team has not yet implemented the process. Additionally, Veterans Affairs officials stated that collecting such data would depend on the use of a standard structure for defining work products, which they have not yet put in place. Agriculture and Commerce officials stated that cost-estimating initiatives have not been a priority, although in one case a component agency at Commerce—the United States Patent and Trademark Office—took the initiative to establish a process to collect and store cost-related data from past estimates. While this should improve cost estimating at the component agency, without establishing an agencywide process to collect and store cost-related data, agencies will find it difficult to improve the data available to all programs and to increase the efficiency of developing cost estimates. Until the selected agencies address the identified weaknesses in their cost-estimating policies, it will be difficult for them to make effective use of program cost estimates for informed decision making, realistic budget formulation, and meaningful progress measurement. A reliable cost estimate is critical to the success of any government acquisition program, as it provides the basis for informed investment decision making, realistic budget formulation and program resourcing, and meaningful progress measurement.
According to OMB, programs must maintain current and well-documented cost estimates, and these estimates must encompass the full life cycle of the programs. In addition, our research has identified a number of best practices that provide a basis for effective program cost estimating and should result in reliable cost estimates that management can use for making informed decisions. These practices can be organized into four characteristics—comprehensive, well-documented, accurate, and credible. These four characteristics of a reliable cost estimate are explained in table 3. While all 16 major acquisition programs we reviewed had developed cost estimates and were using them to inform decision making, 15 of the 16 estimates were not fully reliable and did not provide a sound basis for informed program and budget decisions. The 16 acquisition programs had developed cost estimates and were using their estimates, in part, to support program and budget decisions. For example, most programs used their cost estimate as the basis for key program decisions, such as approval to proceed to full production of a system. In addition, most programs were using their estimates as an input to their annual budget request process. However, nearly all of these programs had estimates that did not fully reflect important cost-estimating practices. Specifically, of the 16 case study programs, only 1 fully met all four characteristics of a reliable cost estimate, while the remaining 15 programs varied in the extent to which they met the four characteristics. Table 4 identifies the 16 case study programs and summarizes our results for these programs. Following the table is a summary of the programs’ implementation of cost-estimating practices. Additional details on the 16 case studies are provided in appendix II. Most programs partially implemented key practices needed to develop a comprehensive cost estimate. Specifically, of the 16 programs, 1 fully implemented the practices for establishing a comprehensive cost estimate, 12 partially implemented the practices, and 3 did not implement them. DOD’s Consolidated Afloat Networks and Enterprise Services program fully implemented key practices for developing a comprehensive cost estimate. Specifically, the program’s cost estimate included both the government and contractor costs for the program over its full life cycle, from inception through design, development, deployment, operation and maintenance, and retirement of the program. Further, the cost estimate reflected the current program and technical parameters, such as the acquisition strategy and physical characteristics of the system. In addition, the estimate clearly described how the various cost subelements were summed to produce the amounts for each cost category, thereby ensuring that all pertinent costs were included, and no costs were double counted. Lastly, cost-influencing ground rules and assumptions, such as the program’s schedule, labor rates, and inflation indexes, were documented. Twelve programs partially implemented key practices for developing a comprehensive cost estimate. Most of these programs fully identified cost-influencing ground rules and assumptions and included government and contractor costs for portions of the program life cycle.
However, 10 of the 12 programs did not include the full costs for all life-cycle phases and other important aspects of the program, such as costs expected to be incurred by organizations outside of the acquiring program (e.g., by other agency subcomponents), all costs for operating and maintaining the system, and costs for the retirement of the system. Without fully accounting for all past, present, and future costs for every aspect of the program, regardless of funding source, the programs’ estimated costs are likely understated and thereby subject to underfunding and cost overruns. In addition, 10 of the 12 programs did not provide evidence that their cost estimates completely defined the program or reflected the current program schedule by documenting a technical baseline description to provide a common definition of the current program, including detailed technical, program, and schedule descriptions of the system. For example, in 2008, Homeland Security’s Rescue 21 program documented the system’s technical characteristics, along with a high-level schedule for the program. Since 2008, however, certain technical characteristics of the program had changed, such as additional deployment sites needed to address communication service gaps identified by local commanders at previously deployed locations. In addition, the planned deployment dates for several locations of the system had been delayed. As a result, the program’s cost estimate did not fully reflect the current scope and schedule of the program. Understanding the program—including the acquisition strategy, technical definition, characteristics, system design features, and technologies to be included—is critical to developing a reliable cost estimate. Without these data, programs will not be able to identify the technical and program parameters that bind the estimate. Three programs did not implement key practices for developing a comprehensive cost estimate in that their estimates did not adequately (1) include all costs over the program’s full life cycle; (2) completely define the program or the current schedule; (3) include a detailed, product-oriented work breakdown structure; and (4) document cost-influencing ground rules and assumptions. For example, the cost estimate for Veterans Affairs’ Health Data Repository program did not include sufficient detail to show that it accounted for all phases of the program’s life cycle (e.g., design, development, and deployment). Further, the estimate did not include important technical baseline information, including the technical, program, and schedule aspects of the system being estimated. Lastly, the estimate only used high-level budget codes rather than a detailed, product-oriented cost element structure to decompose the work, and ground rules and assumptions (e.g., labor rates and base-year dollars) were not documented. Without implementing key practices for developing comprehensive cost estimates, management and oversight organizations cannot be assured that a program’s estimate is complete and accounts for all possible costs, thus increasing the likelihood that the estimate is understated. The majority of programs partially implemented key practices needed to develop a well-documented cost estimate. Specifically, of the 16 programs, 1 fully implemented the practices for establishing a well-documented cost estimate, 10 partially implemented the practices, and 5 did not implement them.
DOD’s Consolidated Afloat Networks and Enterprise Services program fully implemented key practices for developing a well-documented cost estimate. Specifically, the program’s cost estimate captured in writing the source data used (e.g., historical data and program documentation), the calculations performed and their results, and the estimating methodology used to derive each cost element. In addition, the program documented a technical baseline description that included, among other things, the relationships with other systems and planned performance parameters. Lastly, the cost estimate was reviewed both by the Naval Center for Cost Analysis and the Assistant Secretary of the Navy for Research, Development, and Acquisition, which helped ensure a level of confidence in the estimating process and the estimate produced. Ten programs partially implemented key practices for developing a well-documented cost estimate. Most of these programs included a limited description of source data and methodologies used for estimating costs, and documented management approval of the cost estimate. However, 9 of the 10 programs did not include complete documentation capturing source data used, the calculations performed and their results, and the estimating methodology used to derive each cost element. Among other things, the 9 programs had weaknesses in one or more of the following areas: relying on expert opinion but lacking historical data or other documentation to back up the opinions; not documenting their estimate in a way that a cost analyst unfamiliar with the program could understand what was done and replicate it; and lacking supporting data that could be easily updated to reflect actual costs or program changes. Without adequate documentation to support the cost estimate, questions about the approach or data used cannot be answered and the estimate may not be useful for updates or information sharing. In addition, 8 of the 10 programs did not provide management with sufficient information about how the estimate was developed in order to make an informed approval decision. For example, while the EPA’s Financial System Modernization Project’s cost estimate was approved, management was not provided information specific to how the estimate was developed, including enough detail to show whether it was accurate, complete, and high in quality. Because cost estimates should be reviewed and accepted by management on the basis of confidence in the estimating process and the estimate produced by the process, it is imperative that management understand how the estimate was developed, including the risks associated with the underlying data and methods, in making a decision to approve a cost estimate. Five programs did not implement key practices for developing a well-documented cost estimate in that their estimates did not adequately (1) include detailed documentation that described how the estimate was derived, (2) capture the estimating process in such a way that the estimate can be easily replicated and updated, (3) discuss the technical baseline description, and (4) provide evidence that the estimate was fully reviewed and accepted by management. In particular, three of the five programs relied on their budget submission documentation, known as the OMB Exhibit 300, as their life-cycle cost estimate.
The cost estimate information included in these programs’ Exhibit 300 budget submissions was limited to the final estimates in certain phases of the program’s life cycle, such as planning, development, and operations and maintenance. Because a well-documented estimate includes detailed documentation of the source data, calculations and results, and explanations of why particular methods and references were chosen, the programs that relied on their Exhibit 300 budget submissions as their cost estimates lacked the level of rigor and supporting documentation necessary for a well-documented cost estimate. Without a well-documented estimate, a program’s credibility may suffer because the documentation cannot explain the rationale of the methodology or the calculations, a convincing argument of the estimate’s validity cannot be presented, and decision makers’ questions cannot be effectively answered. Most programs partially implemented or did not implement key practices needed to develop an accurate cost estimate. Specifically, of the 16 programs, 2 fully implemented the practices for establishing an accurate cost estimate, 8 partially implemented the practices, and 6 did not implement them. DOD’s Consolidated Afloat Networks and Enterprise Services and Homeland Security’s Integrated Public Alert and Warning System programs fully implemented key practices for developing an accurate cost estimate. Specifically, the programs’ estimates were based on an assessment of most likely costs, in part because a risk and uncertainty analysis was conducted to determine where the programs’ estimates fell against the range of all possible costs. In addition, the programs’ estimates were grounded in a historical record of cost estimating and actual experiences from comparable programs. For example, the cost estimate for the Integrated Public Alert and Warning System program relied, in part, on actual costs already incurred by the program as well as data from three comparable programs, including a legacy disaster management system. Moreover, the programs’ cost estimates were adjusted for inflation and updated regularly to reflect material changes in the programs, such as when the schedule changed. Eight programs partially implemented key practices for developing an accurate cost estimate. Most of these programs accounted for inflation when projecting future costs. However, four of the eight programs did not rely, or could not provide evidence of relying, on historical costs and actual experiences from comparable programs. For example, officials from the Pension Benefit Guaranty Corporation’s Benefit Administration program stated that they relied on historical data along with expert opinion in projecting costs, but the officials did not provide evidence of the data sources or how the historical data were used. Because historical data can provide estimators with insight into actual costs on similar programs—including any cost growth that occurred in the original estimates—without documenting these data, these programs lacked an effective means to challenge optimistic assumptions and bring more realism to their estimates. In addition, six of the eight programs did not provide evidence that they had regularly updated their estimates to reflect material changes in the programs so that they accurately reflected the current status.
For example, Justice’s Unified Financial Management System program developed a cost estimate in 2009; however, according to program documentation, program scope and projected costs have since changed and, as a result, the 2009 estimate no longer reflects the current program. Cost estimates that are not regularly updated with current information can make it more difficult to analyze changes in program costs, impede the collection of cost and technical data to support future estimates, and may not provide decision makers with accurate information for assessing alternative decisions. Six programs did not implement key practices for developing an accurate cost estimate in that their estimates were not adequately (1) based on an assessment of most likely costs, (2) grounded in historical data and actual experiences from comparable programs, (3) adjusted for inflation, and (4) updated to ensure that they always reflect the current status of the program. For example, the cost estimate for Agriculture’s Public Health Information System was not based on an assessment of most likely costs because a risk and uncertainty analysis was not conducted to determine where the estimate fell against the range of all possible costs. In addition, the estimate was based primarily on the program team’s expertise, but was not grounded in historical costs or actual experiences from comparable programs. Lastly, the estimate was not adjusted for inflation and lacked adequate detail to determine whether the program’s latest updates to the cost estimate, completed in 2011, accurately reflected the current status of the program. Without implementing key practices for developing an accurate cost estimate, a program’s estimate is more likely to be biased by optimism and subject to cost overruns, and may not provide management and oversight organizations with accurate information for making well-informed decisions. The majority of programs did not implement all key practices needed to develop a credible cost estimate. Specifically, of the 16 programs, 1 fully implemented the practices for establishing a credible cost estimate, 5 partially implemented the practices, and 10 did not implement them. DOD’s Consolidated Afloat Networks and Enterprise Services program fully implemented key practices for developing a credible cost estimate. Specifically, the program performed a complete uncertainty analysis (i.e., both a sensitivity analysis and Monte Carlo simulation) on the estimate. For example, in performing the sensitivity analysis, the program identified a range of possible costs based on varying key parameters, such as the technology refresh cycle and procurement costs. In addition, the program performed cross checks (using different estimating methods) on key cost drivers, such as system installation costs. Lastly, an independent cost estimate was conducted by the Naval Center for Cost Analysis and the results were reconciled with the program’s cost estimate, which increased the confidence in the credibility of the resulting estimate. Five programs partially implemented key practices for developing a credible cost estimate. Specifically, three of the five programs performed aspects of a sensitivity analysis, such as varying one or two assumptions to assess the impact on the estimate; however, these programs did not perform other important components, such as documenting the rationale for the changes to the assumptions or assessing the full impact of the changes to the assumptions by determining a range of possible costs.
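To make concrete what those missing components involve, the following minimal sketch pairs a one-at-a-time sensitivity analysis with a simple Monte Carlo simulation; the cost elements, ranges, and triangular distributions are hypothetical, and a production cost model would also capture correlation among elements:

```python
# Minimal sketch of a cost risk and uncertainty analysis (hypothetical
# cost elements and ranges, in millions of dollars). A sensitivity
# analysis swings one assumption at a time over a documented range; a
# Monte Carlo simulation varies them together to yield a distribution
# from which a confidence level can be read.

import random

# (low, most likely, high) -- triangular distributions assumed
ELEMENTS = {
    "software_development": (40.0, 50.0, 75.0),
    "hardware":             (15.0, 18.0, 22.0),
    "installation":         (8.0, 10.0, 16.0),
    "operations_support":   (25.0, 30.0, 45.0),
}

point_estimate = sum(ml for _, ml, _ in ELEMENTS.values())  # 108.0

# Sensitivity: swing each element to its low/high while holding the
# others at their most likely values, and record the resulting range.
for name, (low, ml, high) in ELEMENTS.items():
    print(f"{name}: total ranges from "
          f"${point_estimate - ml + low:.1f}M to "
          f"${point_estimate - ml + high:.1f}M")

# Monte Carlo: draw all elements at once, 10,000 times.
random.seed(1)
totals = sorted(
    sum(random.triangular(low, high, ml)
        for low, ml, high in ELEMENTS.values())
    for _ in range(10_000)
)
p80 = totals[int(0.8 * len(totals))]
confidence = sum(t <= point_estimate for t in totals) / len(totals)

print(f"Point estimate ${point_estimate:.1f}M sits at roughly the "
      f"{confidence:.0%} confidence level; the 80th percentile is "
      f"${p80:.1f}M")
```

Because ranges like these are skewed toward overruns, a point estimate built solely from most likely costs typically sits well below the 50 percent confidence level, which is one reason the practices call for risk-adjusting the estimate and reporting its confidence level. The examples that follow show where the reviewed programs fell short of such an analysis.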
For example, the Pension Benefit Guaranty Corporation’s Benefits Administration program performed a sensitivity analysis by varying three program assumptions, one of which was the contractor’s hourly rate, to assess the impact on the cost estimate. However, the program did not provide evidence to support why the adjusted hourly labor rate was used, nor did it apply a range of increases and decreases to the hourly labor rate to determine the level of sensitivity of this assumption on the cost estimate. A comprehensive sensitivity analysis that is well documented and traceable can provide programs with a better understanding of the variables that most affect the cost estimate and assist in identifying the cost elements that represent the highest risk. In addition, three of the five programs adjusted the cost estimate to account for risk and uncertainty, but did not provide evidence to support how costs were risk adjusted or determine the level of confidence associated with the cost estimate. For example, Homeland Security’s Integrated Public Alert and Warning System program’s cost estimate did not include information on the risks considered in its risk and uncertainty analysis or consider the relationship between multiple cost elements when accounting for risks. Without conducting an adequate risk and uncertainty analysis, the cost estimate may be unrealistic because it does not fully reflect the aggregate variability from such effects as schedule slippage, mission changes, and proposed solutions not meeting users’ needs. Ten programs did not implement key practices for developing a credible cost estimate in that the programs did not adequately (1) assess the uncertainty or bias surrounding data and assumptions by conducting a sensitivity analysis, (2) determine the level of risk associated with the estimate by performing a risk and uncertainty analysis, (3) cross-check the estimates for key cost drivers, and (4) commission an independent cost estimate to be conducted by a group outside the acquiring organization to determine whether other estimating methods would produce similar results. For example, Agriculture’s Web-Based Supply Chain Management program did not conduct a sensitivity analysis to better understand which variables most affected the cost estimate, nor did the program conduct a risk and uncertainty analysis to quantify the impact of risks on the estimate. Further, cost drivers were not cross-checked to see if different estimating methodologies produced similar results, and an independent cost estimate was not conducted to independently validate the results of the program’s estimate. Without implementing key practices for developing a credible cost estimate, a program may lack an understanding of the limitations associated with the cost estimate and be unprepared to deal with unexpected contingencies. The lack of reliable cost estimates across the investments exists in part because of the weaknesses previously identified in the eight agencies’ cost-estimating policies. More specifically, program officials at five agencies—Agriculture, Commerce, EPA, Justice, and Veterans Affairs—attributed weaknesses in their programs’ cost estimates, in part, to the fact that agency policies did not require cost-estimating best practices—deficiencies which we also identified in these agencies’ policies.
For example, officials at Commerce’s Comprehensive Large Array-data Stewardship System program stated that, when the program developed its cost estimate, no agency guidance existed regarding the process to follow in developing the estimate. In addition, officials at Veterans Affairs’ Veterans Benefits Management System program stated that they did not perform a risk analysis on their cost estimate because agency guidance on how such an analysis should be performed did not exist. In certain cases, officials stated that program cost estimates were initially developed prior to 2007, when a comprehensive federal resource for cost-estimating best practices, such as GAO’s cost guide, did not exist. However, all 16 programs included in our review have either developed new estimates or updated previous estimates since 2007; nonetheless, as previously mentioned, most of the selected agencies’ policies did not fully address compliance with cost-estimating best practices, including the five agencies mentioned above. If these agencies had updated their policies, programs would have been more likely to follow a standard, high-quality process in developing or updating their cost estimates. Until important cost-estimating practices are fully implemented, the likelihood that these programs will have to revise their current cost estimates upward is increased. Collectively, 13 of the 16 programs have already revised their original life-cycle cost estimates upward by almost $5 billion due, in part, to weaknesses in program cost-estimating practices (see app. III for details on changes in the programs’ cost estimates over time). For example, in many cases, cost estimates had to be revised upward to reflect the incorporation of full costs for all life-cycle phases (e.g., development or operations and maintenance), which had not originally been included. This resulted, in some cases, in significant increases to estimated life-cycle costs. Other reasons that programs cited for revising their life-cycle cost estimates upward included changes to program or system requirements, schedule delays, technology upgrades, and system defects, among other things. Further, as previously mentioned, 13 of the 16 case study programs still have cost estimates that do not include the full costs for all life-cycle phases, which significantly increases the risk that these programs’ cost estimates will continue to be revised upward in the future. Without reliable cost estimates, the 15 programs that did not fully meet best practices will not have a sound basis for informed program decision making, realistic budget formulation and program resourcing, and meaningful progress measurement. Consequently, nearly all of these programs’ cost estimates may continue to be understated and subject to underfunding and cost overruns. Given the enormous size of the federal government’s investment in IT, it is critical that such investments are based on reliable estimates of program costs. While all of the selected agencies have established policies that at least partially addressed a requirement for programs to develop full life-cycle cost estimates, most of the agencies’ policies have significant weaknesses. With the exception of DOD, these policies omit or lack sufficient guidance on several key components of a comprehensive policy, including, for example, management review and acceptance of program cost estimates, the type of work structure needed to effectively estimate costs, and training requirements for all relevant personnel.
Without comprehensive policies, agencies may not have a sound basis for making decisions on how to most effectively manage their portfolios of projects. Most programs’ estimates at least partially reflected cost-estimating best practices, such as documenting cost-influencing ground rules and assumptions; however, with the exception of DOD’s Consolidated Afloat Networks and Enterprise Services program, the programs we reviewed had not established fully reliable cost estimates, increasing the likelihood that the estimates are incomplete and do not account for all possible costs. For example, without including costs for all phases of a program’s life cycle and performing a comprehensive risk and uncertainty analysis, a program’s estimated costs could be understated and subject to underfunding and cost overruns, putting it at risk of being reduced in scope or requiring additional funding to meet its objectives. Many of the weaknesses found in these programs can be traced back to inadequate agency cost-estimating policies. Without better estimates of acquisition life-cycle costs, neither the programs nor the agencies have reliable information for supporting program and budget decisions. Consequently, the likelihood of cost overruns, missed deadlines, and performance shortfalls is significantly increased. To address weaknesses identified in agencies’ policies and practices for cost estimating, we are making the following recommendations: We recommend that the Secretaries of Agriculture, Commerce, Homeland Security, Labor, and Veterans Affairs, the Attorney General, and the Administrator of the Environmental Protection Agency direct responsible officials to modify policies governing cost estimating to ensure that they address the weaknesses that we identified. We also recommend that the Secretaries of Agriculture, Commerce, Homeland Security, Labor, and Veterans Affairs, the Attorney General, the Administrator of the Environmental Protection Agency, and the Director of the Pension Benefit Guaranty Corporation direct responsible officials to update future life-cycle cost estimates of the system acquisition programs discussed in this report using cost-estimating practices that address the detailed weaknesses that we identified. Lastly, although DOD fully addressed the components of an effective cost-estimating policy, in order to address the weaknesses we identified with a key system acquisition discussed in this report, we recommend that the Secretary of Defense direct responsible officials to update future life-cycle cost estimates of the Tactical Mission Command program using cost-estimating practices that address the detailed weaknesses that we identified. We provided the selected eight agencies and the Pension Benefit Guaranty Corporation with a draft of our report for review and comment. A management analyst in the Department of Justice’s Internal Review and Evaluation Office, Justice Management Division, responded orally that the department had no comments. Six of the agencies and the Pension Benefit Guaranty Corporation provided written comments, and the Department of Labor provided oral and written comments. These agencies generally agreed with our results and recommendations, although EPA disagreed with our assessment of the cost-estimating practices used for one of its programs. These agencies also provided technical comments, which we incorporated in the report as appropriate. The comments of the agencies and the corporation are summarized below: The U.S. 
Department of Agriculture’s Acting Chief Information Officer stated that the department concurred with the content of the report. Agriculture’s comments are reprinted in appendix IV. The Acting Secretary of Commerce stated that the department fully concurred with our findings and recommendations. Among other things, the Acting Secretary described a number of ongoing actions to address the weaknesses we identified, such as modifying departmental policies governing cost estimating to include an additional cost-estimating training course and cost-estimating training requirements. In addition, the department stated that forthcoming policy and guidance are intended to ensure that the cost estimates for high-profile programs are comprehensive, accurate, credible, and well-documented. Commerce’s comments are reprinted in appendix V. DOD’s Director of Cost Assessment and Program Evaluation stated that the department partially concurred with our recommendation but agreed with the criteria, methodology, and assessment of the DOD programs. The director added, however, that there is no plan to formally update the Tactical Mission Command life-cycle cost estimate, as the program is in the system deployment phase of its acquisition life cycle. We recognize that the programs included in our study are at varying stages of their acquisition life cycles and that updates to their cost estimates may not be justified. Accordingly, our recommendation to DOD is specific to only future life-cycle cost estimates. In this regard, if any significant changes occur in the program during deployment of the system that warrant an update to the cost estimate, it will be important that the program use best practices that address the weaknesses we identified. DOD’s comments are reprinted in appendix VI. EPA’s Assistant Administrator of the Office of Solid Waste and Emergency Response and its Assistant Administrator and Chief Information Officer of the Office of Environmental Information stated, in regard to our assessment of cost-estimating policies, that EPA recognized that its policies did not require cost-estimating best practices and that the agency will update its Systems Life Cycle Management procedures accordingly. The officials acknowledged that sound fiscal management practices should be followed in all aspects of the agency’s information technology operations, including cost estimating for the development of new systems. In regard to our assessment of cost-estimating practices for two system acquisition programs, EPA stated that it did not have any comments on our assessment of the Financial System Modernization Project; however, it did not believe our assessment accurately reflected the cost-estimating practices employed for the development of the Superfund Enterprise Management System. In particular, the Office of Solid Waste and Emergency Response stated in its written response and in technical comments that it believed it had met the spirit and intent of the cost-estimating best practices in GAO’s cost guide, even though the program may have used different processes or documentation in order to do so. We recognize and agree that organizations should tailor the use of the cost-estimating best practices as appropriate based on, for example, the development approach being used, and we took this factor into consideration during our review of the 16 acquisition programs.
However, we stand by our assessment of the Superfund Enterprise Management System program’s cost estimate on the basis of the weaknesses described in appendix II of this report. In particular, as we discuss, the program’s cost estimate lacked key supporting documentation: costs were not documented at a sufficient level of detail; the source data, calculations, and methodologies used to develop the estimate were not documented; and the source of and rationale for the inflation factor used were not documented. In addition, the lack of detailed cost-estimate information precluded us from making the linkage between the cost estimate and other important program documents, such as the system’s technical baseline and schedule, in order to determine whether the estimate reflects the current program and status. Because rigorous documentation is essential for justifying how an estimate was developed and for presenting a convincing argument for an estimate’s validity, weaknesses in this area contributed significantly to weaknesses across multiple best practices areas, including the estimate’s comprehensiveness and accuracy. Further, regarding the Office of Solid Waste and Emergency Response’s comment that our cost-estimating guide was not published until 3 years after development of the Superfund Enterprise Management System commenced, we disagree that this would preclude the program from satisfying cost-estimating best practices. Specifically, the program updated its cost estimate in 2011, 2 years after the issuance of the GAO cost guide. At that time, the program could have revised its cost estimate using available best practice guidance. Lastly, we disagree with the assertion that the draft report erroneously concluded that the Superfund Enterprise Management System cost estimate increased from $39.3 million to $62.0 million in just 2 years. In its written response, the Office of Solid Waste and Emergency Response stated that the revised cost estimate was a direct result of an increase in the duration of operations and maintenance from fiscal year 2013 (in the $39.3 million estimate) to fiscal year 2017 (in the $62.0 million estimate). However, according to documentation provided by the Superfund Enterprise Management System program, the $39.3 million estimate, which was completed in 2009, was based on a 10-year life cycle (from fiscal year 2007 to fiscal year 2017) and included costs for operations and maintenance through fiscal year 2017. Subsequently, in 2011, the program revised its estimate to approximately $62.0 million, which was also based on a 10-year life cycle (from fiscal year 2007 to fiscal year 2017) and included operations and maintenance costs through 2017. The revised estimate is an increase of about $22.7 million over the initial estimate. According to program documentation, this change in the cost estimate was primarily due to the inclusion of additional operations and maintenance costs for data and content storage and hosting for the fully integrated system between fiscal year 2014 and fiscal year 2017, which were erroneously omitted from the 2009 estimate. Based on these factors, we maintain that our report reflects this information appropriately. EPA’s comments are reprinted in appendix VII. The Department of Homeland Security’s Director of the Departmental GAO-Office of the Inspector General Liaison Office stated that the department concurred with our recommendations.
Among other things, the department stated that its Office of Program Accountability and Risk Management intends to develop a revised cost-estimating policy that will further incorporate cost-estimating best practices, as well as work to provide cost-estimating training to personnel on major programs throughout the department. Homeland Security’s comments are reprinted in appendix VIII. In oral comments, the Administrative Officer in the Department of Labor’s Office of the Assistant Secretary for Administration and Management stated that the department generally agreed with our recommendations. Further, in written comments, the Assistant Secretary for Administration and Management stated that the department, through several initiatives, such as its Post Implementation Review process and training for IT managers, will continue to improve upon its IT cost estimation. The department also commented on certain findings in our draft report. In particular, the Assistant Secretary stated that, given the department’s relatively small IT portfolio, establishing a central, independent office dedicated to cost estimating is not justified. We recognize that agency IT portfolios vary in size; however, as noted in our report, agencies should establish a central cost-estimating team commensurate with the size of their agency, which could consist of a few resident experts instead of a full independent office. Regarding our second recommendation, according to the Assistant Secretary, the Occupational Safety and Health Administration (OSHA) stated that it believes our assessment of the credibility of the OSHA Information System program’s 2010 cost estimate was too low and did not reflect additional information provided in support of the program’s 2008 cost estimate. In our assessment of the program’s 2010 estimate, we acknowledge evidence provided from the 2008 estimate; however, this evidence did not adequately show that important practices for ensuring an estimate’s credibility, including making adjustments to account for risk and conducting a sensitivity analysis, were performed on the 2010 cost estimate. In addition, OSHA stated that an independent estimate was conducted at the outset of the program by an industry-leading IT consulting firm as recommended by the Department of Labor Office of the Inspector General. While we acknowledge that this was done in 2005, the resulting estimate was the only one developed at the time and thus was not used as a means of independent validation—i.e., to determine whether multiple estimating methods produced similar results. Therefore, the independent estimate conducted in 2005 would not increase the credibility of the program’s current cost estimate. Labor’s comments are reprinted in appendix IX. The Director of the Pension Benefit Guaranty Corporation stated that the corporation was pleased that its selected IT investment met at least half, or a large portion, of our quality indicators for cost estimating. Further, the Director stated that the corporation will evaluate and improve future life-cycle cost estimates for the Benefit Administration investment. The Pension Benefit Guaranty Corporation’s comments are reprinted in appendix X. The Chief of Staff for the Department of Veterans Affairs stated that the department concurred with our recommendations and has efforts under way to improve its cost-estimating capabilities.
Among other things, the Chief of Staff stated that the department plans to complete, by the end of the first quarter of fiscal year 2013, an evaluation of the utility of establishing an organizational function focused solely on multiyear cost estimation. In addition, to improve cost-estimating practices on its IT efforts, the department stated that it has additional training planned in early fiscal year 2013. Veterans Affairs’ comments are reprinted in appendix XI. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the Secretaries of Agriculture, Commerce, Defense, Homeland Security, Labor, and Veterans Affairs; the Attorney General; the Administrator of the Environmental Protection Agency; the Director of the Pension Benefit Guaranty Corporation; and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-6304 or by e-mail at melvinv@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs are on the last page of this report. Key contributors to this report are listed in appendix XII. Our objectives were to (1) assess the extent to which selected departments and agencies have appropriately implemented cost-estimating policies and procedures, and (2) evaluate whether selected information technology (IT) investments at these departments and agencies have reliable cost estimates to support budget and program decisions. For this review, we assessed eight federal agencies and 16 investments. To select these agencies and investments, we relied on the Office of Management and Budget’s Fiscal Year 2010 Exhibit 53, which, at the time we made our selections, contained the most current and complete data on 28 agencies’ planned IT spending. To ensure that we selected agencies with varying levels of spending on IT, we sorted them into three ranges based on their planned spending in fiscal year 2010: greater than or equal to $10 billion; greater than or equal to $1 billion but less than $10 billion; and greater than $0, but less than $1 billion. The number of agencies selected from each range was based on the relative number of IT investments within each range, and the specific agencies selected were those with the highest amount of planned IT spending in fiscal year 2010. Specifically, we selected one agency with greater than $10 billion in planned IT spending, five agencies with between $1 billion and $10 billion in planned spending, and two agencies with less than $1 billion in planned spending. In doing so, we limited our selections to those agencies at which we could identify two investments that met our selection criteria for investments (see the following paragraph for a discussion of our investment selection methodology). These agencies were the Departments of Agriculture, Commerce, Defense, Homeland Security, Justice, Labor, and Veterans Affairs, and the Environmental Protection Agency. We excluded the Departments of Education, Health and Human Services, and the Treasury, and the General Services Administration from our selection, even though they initially met our agency selection criteria, because we could not identify two investments at these agencies that met our investment selection criteria.
The Office of Management and Budget defines a major IT investment as a system or an acquisition requiring special management attention because it has significant importance to the mission or function of the agency, a component of the agency, or another organization; is for financial management and obligates more than $500,000 annually; has significant program or policy implications; has high executive visibility; has high development, operating, or maintenance costs; is funded through other than direct appropriations; or is defined as major by the agency's capital planning and investment control process. We excluded from our selection investments that were primarily an infrastructure investment, had a high percentage of steady-state spending versus development spending, had less than $5 million in planned spending for fiscal year 2010, or were the subjects of recent or ongoing GAO audit work.

To assess the extent to which selected agencies had appropriately implemented cost-estimating policies and procedures, we analyzed agency policies and guidance for cost estimating. Specifically, we compared these policies and guidance documents to best practices recognized within the federal government and private industry for cost estimating. These best practices are contained in the GAO Cost Guide (GAO, GAO Cost Estimating and Assessment Guide: Best Practices for Developing and Managing Capital Program Costs, GAO-09-3SP (Washington, D.C.: March 2009)) and include, for example, establishing a clear requirement for cost estimating, requiring management review and approval of cost estimates, and requiring and enforcing training in cost estimating. For each policy component, we assessed it as either being not met—the agency did not provide evidence that it addressed the policy component or provided evidence that it minimally addressed the policy component; partially met—the agency provided evidence that it addressed about half or a large portion of the policy component; or fully met—the agency provided evidence that it fully addressed the policy component. We also interviewed key agency officials to obtain information on their ongoing and future cost-estimating plans.

To evaluate whether selected IT investments had reliable cost estimates, we compared the documentation supporting each program's cost estimate to the cost-estimating best practices in the GAO Cost Guide, which are grouped into the four characteristics of a reliable estimate—comprehensive, well-documented, accurate, and credible. For each characteristic, we assessed the program as either not met—the program did not provide evidence that it implemented the practices or provided evidence that it minimally implemented the practices; partially met—the program provided evidence that it implemented about half or a large portion of the practices; or fully met—the program provided evidence that it fully implemented the practices. We then summarized these assessments by characteristic. We also interviewed program officials to obtain clarification on how cost-estimating practices are implemented and how the cost estimates are used to support budget and program decisions.

We conducted this performance audit from July 2011 to July 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

We conducted case studies of 16 major system acquisition programs (listed in table 5). For each of these programs, the remaining sections of this appendix provide a brief description of the program and its life-cycle cost estimate, and an assessment of the program's cost estimate against the four characteristics of a reliable cost estimate—comprehensive, well-documented, accurate, and credible.
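The three-level scale applied to each policy component and cost-estimating practice can be summarized as a simple mapping. The sketch below is illustrative only; the numeric thresholds are an assumption for demonstration, since the actual ratings reflected analysts' qualitative judgment of the evidence.

```python
# Minimal sketch of the three-level rating scale described above. The
# thresholds here are assumed for illustration; in the actual review,
# analysts rated the evidence qualitatively rather than numerically.

def rate(evidence_fraction):
    """Map the portion of a practice supported by evidence to a rating."""
    if evidence_fraction >= 1.0:
        return "fully met"
    if evidence_fraction >= 0.5:        # about half or a large portion
        return "partially met"
    return "not met"                    # no evidence, or only minimal evidence

# Example: summarizing hypothetical practice ratings by characteristic
# for one program's cost estimate.
characteristics = {
    "comprehensive": 0.5,
    "well-documented": 0.3,
    "accurate": 1.0,
    "credible": 0.1,
}
for name, fraction in characteristics.items():
    print(f"{name}: {rate(fraction)}")
```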
The key below defines "fully met," "partially met," and "not met" as assessments of programs' implementation of cost-estimating best practices. Fully met: the program provided evidence that it fully implemented the cost-estimating practices. Partially met: the program provided evidence that it implemented about half or a large portion of the cost-estimating practices. Not met: the program did not provide evidence that it implemented the practices or provided evidence that it only minimally implemented the cost-estimating practices.

The Public Health Information System (PHIS) program is designed to modernize the Food Safety and Inspection Service's systems for ensuring the safety of meat, poultry, and egg products. According to the agency, the current systems environment includes multiple, disparate legacy systems that do not effectively support agency operations. PHIS is intended to replace these legacy systems with a single, web-based system that addresses major business areas such as domestic inspection, import inspection, and export inspection. The program intends to implement functionality to support domestic inspection and import inspection in 2012, and export inspection in 2013. In 2007, PHIS was a development contract within the larger Public Health Information Consolidation Projects investment. In 2011, after PHIS was separated out as its own major investment and the program was rebaselined, the PHIS program developed its own cost estimate of $82.3 million. This includes $71.4 million for development and $10.9 million for operations and maintenance over a 12-year life cycle. The PHIS program's current cost estimate does not exhibit all of the qualities of a reliable cost estimate. Specifically, while the estimate partially reflects key practices for developing a comprehensive estimate, it does not reflect key practices for developing a well-documented, accurate, or credible estimate. Table 6 provides details on our assessment of the PHIS program's cost estimate.

The Web-Based Supply Chain Management (WBSCM) program is designed to modernize the U.S. Department of Agriculture's commodity management operations, including the purchasing and distribution of approximately $2.5 billion in food products for needy recipients through domestic and foreign food programs. To accomplish this, the program is replacing a legacy system with a web-based commercial-off-the-shelf solution. In 2010, the program achieved full operational capability. Ongoing efforts are focused on addressing a significant number of system defects identified since deployment. In 2003, WBSCM developed an initial cost estimate of $142.9 million. This included $105.5 million for development and $37.4 million for operations and maintenance over a 7-year life cycle. Subsequently, after revising the estimate each year as part of the program's Office of Management and Budget Exhibit 300 submission, in 2011, WBSCM revised its cost estimate to $378.4 million, an increase of about $235.5 million over its initial cost estimate. This includes $104.9 million for development and $273.5 million for operations and maintenance over an 18-year life cycle. These changes are due to, among other things, incorporating additional years of operations and maintenance costs, a recently planned system upgrade, and additional costs associated with addressing system defects. The WBSCM program's current cost estimate does not exhibit any of the qualities of a reliable cost estimate.
Specifically, the estimate did not reflect key practices for developing a comprehensive, well-documented, accurate, or credible estimate. Table 7 provides details on our assessment of the WBSCM program's cost estimate.

The Comprehensive Large Array-data Stewardship System (CLASS) is designed to provide environmental data archiving and access. The National Oceanic and Atmospheric Administration has been acquiring these data for more than 30 years, from a variety of observing systems throughout the agency and from a number of its partners. Currently, large portions of the nation's environmental data are stored and maintained in disparate systems, with nonstandard archive and access capabilities. With significant increases expected in both the data volume and the number and sophistication of users over the next 15 years, CLASS is intended to provide a standard, integrated solution for environmental data archiving and access managed at the enterprise level. CLASS is currently developing satellite data archiving and access capabilities for several satellite programs, including the next generation of geostationary satellites—known as the Geostationary Operational Environmental Satellites-R Series—which are planned for launch beginning in 2015. In 2006, the National Oceanic and Atmospheric Administration developed the initial CLASS cost estimate of approximately $195.5 million. This included $118.3 million for development and $77.2 million for operations and maintenance over a 9-year life cycle. Subsequently, after revising the cost estimate three times, in 2011, CLASS established its current cost estimate of approximately $240.0 million, an increase of about $44.5 million over its initial cost estimate. This includes $176.0 million for development and $64.0 million for operations and maintenance over a 17-year life cycle. CLASS program officials stated that the increase in the estimate was due, in part, to additional data archiving requirements and external program delays. The CLASS program's current cost estimate does not exhibit all qualities of a reliable cost estimate. Specifically, while the estimate partially reflects key practices for developing a comprehensive estimate, it does not reflect key practices for developing a well-documented, accurate, or credible estimate. Table 8 provides details on our assessment of the CLASS program's cost estimate.

The Patents End-to-End: Software Engineering (PE2E-SE) program is designed to provide a fully electronic patent application process. According to the U.S. Patent and Trademark Office, the agency's current enterprise architecture is unable to meet current demands, and it has relied on inefficient and outdated automated legacy systems that inhibit the timely examination of patent applications. PE2E-SE intends to provide an electronic filing and processing application that enables examiners to meet current needs for the timely examination of patents. To accomplish this, PE2E-SE is following an Agile development approach and intends to implement a system using a text-based eXtensible Markup Language standard that is flexible, scalable, and leverages modern technologies with open standards. In fiscal year 2012, the program plans to build new functionality, such as new text search tools, and deploy the system to a limited set of examiners. In 2010, PE2E-SE developed an initial cost estimate of $130.2 million. This estimate included only development costs, over a 3-year life cycle.
Subsequently, in 2012 and after multiple revisions, PE2E-SE revised its cost estimate to $188.2 million, an increase of $58.0 million. This includes $122.8 million for development and $65.4 million for operations and maintenance over a 7-year life cycle. According to program officials, these changes are primarily due to incorporating costs for operations and maintenance into the estimate. The PE2E-SE program's current cost estimate does not exhibit all of the qualities of a reliable cost estimate. Specifically, while the estimate partially reflects key practices for developing a comprehensive, well-documented, and accurate estimate, it does not reflect key practices for developing a credible estimate. Table 9 provides details on our assessment of the PE2E-SE program's cost estimate.

The Consolidated Afloat Networks and Enterprise Services (CANES) program is designed to consolidate and standardize the Department of the Navy's existing network infrastructures and services. According to the department, the current network infrastructure is highly segmented and includes several legacy environments that have created inefficiencies in the management and support of shipboard networks. The CANES program is intended to, among other things, reduce and eliminate existing standalone afloat networks, provide a technology platform that can rapidly adjust to changing warfighting requirements, and reduce the shipboard hardware footprint. To accomplish this, the program will rely primarily on commercial off-the-shelf software integrated with network infrastructure hardware components. The CANES program currently plans to procure four limited fielding units and conduct preinstallation activities for them by the end of fiscal year 2012, and to achieve full operational capability in 2023. In 2010, the Navy's Space and Naval Warfare Systems Command Cost Analysis Division developed a program life-cycle cost estimate for the CANES program, and the Naval Center for Cost Analysis developed an independent cost estimate. Subsequently, these organizations worked collaboratively to develop the program's life-cycle cost estimate of approximately $12.7 billion. This included approximately $4.0 billion for development and approximately $8.8 billion for operations and maintenance over a 23-year life cycle. The CANES program's cost estimate exhibits all of the qualities of a reliable cost estimate. Specifically, the estimate reflects key practices for developing a comprehensive, well-documented, accurate, and credible estimate. Table 10 provides details on our assessment of the CANES program's cost estimate.

The Tactical Mission Command (TMC) program is designed to be the tactical battle command system for commanders and staffs from battalions through the Army Service Component Commands. TMC is intended to provide commanders and staff with improved battle command capabilities, including increasing the speed and quality of command decisions. In the near term, TMC is to address gaps in the Army's tactical battle command capability by delivering enhanced collaborative tools and enterprise services, and, in the long term, TMC is to address rapid improvements in technological capabilities through technology refresh. A key component—known as the Command Post of the Future—is intended to provide commanders and key staff with an executive-level decision support capability enhanced with real-time collaborative tools. These capabilities are expected to enhance situational awareness and support an execution-focused battle command process.
Currently, the program is working to complete development of Command Post of the Future 7.0, which the program plans to finish by the end of fiscal year 2012. In 2008, the TMC program developed an initial cost estimate of approximately $2.0 billion. This included approximately $1.9 billion for development and $116.5 million for maintenance over a 14-year life cycle. According to program officials, each subsequent year, in preparation for the annual Weapons System Review, the program updated its life-cycle cost estimate. In 2011, the TMC program established its current cost estimate of approximately $2.7 billion, an increase of approximately $723 million over its initial cost estimate. This included approximately $2.0 billion for development and $650.7 million for operations and maintenance over a 23-year life cycle. Program officials stated that the increase in the estimate was due, in part, to changes in the life-cycle time frames, fielding schedules, number of units planned for deployment, and other software development changes. The TMC program's current cost estimate does not exhibit all qualities of a reliable cost estimate. Specifically, while the estimate partially reflects key practices for developing a comprehensive, well-documented, and accurate estimate, it does not reflect key practices for developing a credible estimate. Table 11 provides details on our assessment of the TMC program's cost estimate.

The Financial System Modernization Project (FSMP) replaced the Environmental Protection Agency's legacy core financial system. The system is intended to address agency-identified shortcomings in its previous financial systems, such as inconsistent data, limited system interoperability, low system usability, and costly maintenance. FSMP includes key functionality for performing cost and project management, general ledger, payment management, and receivables management. According to the agency, the system is intended to, among other things, eliminate repetitive data entry, integrate legacy systems, and enable agency staff to manage workflow within the Office of the Chief Financial Officer and between other business lines (e.g., acquisitions and grants management). The system was deployed in October 2011. In 2005, the FSMP program developed an initial cost estimate of approximately $163.2 million. This included $42.8 million for development and $120.4 million for operations and maintenance over a 25-year life cycle. After revising the cost estimate three times, in 2010 the program established its current cost estimate of approximately $169.3 million, an increase of approximately $6 million over its initial cost estimate. This includes $103.7 million for development and $65.7 million for operations and maintenance over a 15-year life cycle. Program officials stated that the changes to the program's life-cycle cost estimate are due, in part, to changes in the Environmental Protection Agency's policies and guidance, such as using a 15-year program life cycle instead of the 25-year life cycle used in the program's original estimate. In addition, officials stated that the FSMP program has undergone significant schedule and scope changes, including delaying the system's deployment date from 2008 to 2011 and reducing the planned system components (e.g., budget formulation)—all of which have affected the program's life-cycle cost estimate. The FSMP program's current cost estimate does not exhibit all qualities of a reliable cost estimate.
Specifically, while the estimate partially reflects key practices for developing a comprehensive, well-documented, and accurate estimate, it does not reflect key practices for developing a credible estimate. Table 12 provides details on our assessment of the FSMP program's cost estimate.

The Superfund Enterprise Management System (SEMS) is to replace three legacy systems and multiple applications used to comply with the Comprehensive Environmental Response, Compensation, and Liability Act of 1980—commonly known as Superfund—which provides federal authority to respond directly to releases or threatened releases of hazardous substances that may endanger public health or the environment. In addition, SEMS is designed to implement innovative software tools that will allow for more efficient operation of the Superfund program. Of the three legacy systems expected to be replaced by SEMS, two have already been integrated, and the one remaining system is expected to be fully integrated in 2013, at which time SEMS is planned to achieve full operational capability. In 2009, the SEMS program developed an initial cost estimate of approximately $39.3 million. This included $20.8 million for development, $14.7 million for operations and maintenance, and $3.8 million for government personnel costs over a 10-year life cycle. Subsequently, in 2011, the program revised its estimate to approximately $62.0 million, an increase of about $22.7 million over its initial cost estimate. This includes $22.8 million for development and $39.2 million for operations and maintenance over a 10-year life cycle. Program officials stated that the increase in the estimate was primarily due to incorporating additional operations and maintenance costs that were erroneously omitted from the initial estimate. The SEMS program's current cost estimate does not exhibit all qualities of a reliable cost estimate. Specifically, while the estimate partially reflects key practices for developing a credible estimate, it does not reflect key practices for developing a comprehensive, well-documented, or accurate estimate. Table 13 provides details on our assessment of the SEMS program's cost estimate.

The Integrated Public Alert and Warning System (IPAWS) is designed to provide a reliable, integrated, and comprehensive system to alert and warn the American people before, during, and after disasters. To accomplish this, the program is developing the capability to disseminate national alerts to cellular phones and expanding the existing Emergency Alert System to cover 90 percent of the American public. In 2011, IPAWS established standards for alert messages, began cellular carrier testing, and conducted a nationwide test of the expanded Emergency Alert System capabilities. The program intends to deploy the cellular alerting capability nationwide in 2012 and complete its expansion of the Emergency Alert System in 2017. In 2009, IPAWS developed its initial estimate of $259 million, which included $252.1 million for development and $6.9 million for government personnel costs, but did not include operations and maintenance costs. In 2011, the program revised its estimate to $311.4 million, an increase of about $52.3 million. This includes $268.9 million for development and $42.5 million for operations and maintenance over an 11-year life cycle. According to program officials, the increase in the cost estimate is primarily due to the inclusion of costs to operate and maintain the system during development.
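The life-cycle arithmetic reported for each case study follows a common pattern: a revision's total equals its development portion plus its operations and maintenance portion, and the reported increase is the difference between revisions. The sketch below works through the IPAWS figures quoted above, in millions of dollars; small residuals reflect rounding in the reported amounts.

```python
# Minimal sketch of the life-cycle arithmetic reported for each case study,
# using the IPAWS figures quoted above (in millions of dollars). Rounding
# in the report can leave small residuals against the quoted change.

def total(development, operations_maintenance):
    """A life-cycle total is development plus operations and maintenance."""
    return development + operations_maintenance

initial = 259.0                       # $252.1M development + $6.9M personnel
revised = total(268.9, 42.5)          # 2011 revision
increase = revised - initial

print(f"revised total:   ${revised:.1f} million")    # 311.4
print(f"computed change: ${increase:.1f} million")   # 52.4, vs. "about $52.3 million" quoted
```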
The IPAWS program’s current cost estimate does not exhibit all qualities of a reliable cost estimate. Specifically, while the estimate fully reflects key practices for developing an accurate estimate, it only partially reflects key practices for developing a comprehensive, well-documented, and credible estimate. Table 14 provides details on our assessment of IPAWS program’s cost estimate. Rescue 21 is designed to modernize the U.S. Coast Guard’s maritime search and rescue capability. According to the agency, the current system—the National Distress and Response System, does not meet the demands of the 21st century in that it does not provide complete coverage of the continental United States, cannot receive distress calls during certain transmissions, lacks interoperability with other government agencies, and is supported by outdated equipment. Rescue 21 is intended to provide a modernized maritime distress and response communications system, with increased maritime homeland security capabilities that encompass coastlines, navigable rivers, and waterways in the continental United States, in addition to Hawaii, Guam, and Puerto Rico. Rescue 21 is currently undergoing regional deployment, which is planned to be completed in fiscal year 2017. In 1999, the Rescue 21 program developed an initial cost estimate of $250 million for acquisition of the system, but this estimate did not include any costs for operations and maintenance of the system. Following three rebaselines, in 2006 the Rescue 21 program revised the estimate to $1.44 billion, an increase of approximately $1.19 billion over the initial estimate. This included $730 million in development and $707 million in operations and maintenance over a 16-year life cycle. According to program documentation, these increases were due, in part, to incorporating costs for the operation and maintenance of the system. Subsequently, in 2008, the Rescue 21 program revised its cost estimate again to $2.66 billion, an increase of approximately $1.22 billion over the previous estimate, and approximately $2.41 billion over the initial cost estimate. This includes $1.07 billion in development and $1.59 billion in operations and maintenance over a 16-year life cycle. Program officials stated that the most recent increase in the cost estimate was primarily due to schedule delays, an extension of the program’s life cycle by 6 years based on an expected increase in the system’s useful life, and to reflect more realistic estimates of future costs for ongoing system technology refreshment. The Rescue 21 program’s current cost estimate does not exhibit all qualities of a reliable cost estimate. Specifically, the estimate partially reflects key practices for developing a comprehensive, well-documented, accurate, and credible estimate. Table 15 provides details on our assessment of the Rescue 21 program’s cost estimate. Since 1998, the Combined DNA Index System (CODIS) has supported the Federal Bureau of Investigation’s mission by assisting criminal investigation and surveillance through DNA collection and examination capabilities. CODIS is an automated DNA information processing and telecommunications system that generates potential investigative leads in cases where biological evidence is recovered. Among other things, CODIS links crime scene evidence to other crimes and/or offenders, which can identify serial offenders and/or potential suspects. CODIS serves over 190 participating laboratories and 73 international laboratories representing 38 countries. 
According to the Federal Bureau of Investigation, the reliability and expandability of CODIS are critical to the agency's ability to effectively aid law enforcement investigations through the use of biometrics, prompting the decision in 2006 to initiate a modernization effort, referred to as Next Generation CODIS (NGCODIS). In 2011, the program achieved full operational capability for CODIS 7.0, a software release of NGCODIS, which included functionality for, among other things, implementing a software solution to comply with European Union legislation for DNA data exchange and maintaining DNA records of arrested persons. Additional functionality is expected in the future; however, all program development has been put on hold until the necessary funding is approved. In 2006, the CODIS program developed an initial cost estimate for NGCODIS of $128.4 million. This included approximately $69.6 million for development and $58.8 million for operations and maintenance over an 11-year life cycle. In 2009, the CODIS program developed an additional cost estimate of $58.6 million to account for operations costs associated with certain versions of NGCODIS. According to program officials, even though the program estimated additional operations costs of $58.6 million, the program's original cost estimate has increased by only $8.6 million because originally planned development work related to incorporating advancements in DNA technology was delayed and the costs associated with this work were removed from the cost estimate. The CODIS program's current cost estimate for NGCODIS does not exhibit all qualities of a reliable cost estimate. Specifically, the estimate partially reflects key practices for developing a comprehensive, well-documented, accurate, and credible estimate. Table 16 provides details on our assessment of the NGCODIS cost estimate.

The Unified Financial Management System (UFMS) is to modernize the Department of Justice's financial management and procurement operations. To accomplish this, UFMS is to replace four legacy core accounting systems and multiple procurement systems with a commercial off-the-shelf product. Ultimately, the system is expected to streamline and standardize financial management and procurement processes and procedures across the department's component agencies. UFMS was deployed to two component agencies—the Drug Enforcement Administration and the Bureau of Alcohol, Tobacco, Firearms, and Explosives—in fiscal years 2009 and 2011, respectively. The system is planned to be deployed at other component agencies, including the U.S. Marshals Service and the Federal Bureau of Investigation, between fiscal years 2013 and 2014, and is expected to achieve full operational capability in fiscal year 2014. In 2002, the UFMS program developed an initial cost estimate of $357.2 million. This included approximately $196.4 million for development and $160.8 million for maintenance over a 10-year life cycle. In 2009, the UFMS program revised the estimate to $1.05 billion, an increase of approximately $692.8 million. This included $469.5 million for development and $581.6 million for operations and maintenance over a 20-year life cycle. Program officials stated that the increase in the estimate was due to extending the program's life cycle to include additional years of development work and operations and maintenance of the system. Subsequently, in 2011, the program revised its cost estimate to $851.1 million, a decrease of approximately $198.9 million.
This estimate includes $419.5 million for development and $431.6 million for operations and maintenance over a 20-year life cycle. Program officials stated that the decrease in the cost estimate was due to a reduction in the number of component agencies that planned to implement UFMS. Specifically, UFMS removed the Federal Bureau of Prisons; Offices, Boards and Divisions; and Office of Justice Programs from the system's deployment schedule in order to reduce the overall cost of the system. The UFMS program's current cost estimate does not exhibit all qualities of a reliable cost estimate. Specifically, while the estimate partially reflects key practices for developing a comprehensive, well-documented, and accurate estimate, it does not reflect key practices for developing a credible estimate. Table 17 provides details on our assessment of the UFMS program's cost estimate.

The OSHA Information System (OIS) is a management tool consisting of a suite of applications intended to help reduce workplace fatalities, injuries, and illnesses through enforcement, compliance assistance, and consultation. According to the agency, OIS is intended to close performance gaps with existing legacy systems resulting from irreplaceable legacy hardware and software, the inability of legacy systems to fully support the agency's mission, and the absence of an application that supports key business process areas, such as compliance assistance. Ultimately, OIS is expected to provide a centralized web-based solution to be used by more than 5,900 users at the federal and state level, including approximately 4,200 enforcement officers and 500 safety and health consultants. The program completed development in 2011 and is working to complete deployment of the system, which it plans to finish by the end of fiscal year 2016, while also addressing operations and maintenance. In 2006, the OIS program developed an initial cost estimate of $72.3 million. This included $42.0 million for development and $30.3 million for operations and maintenance over a 12-year life cycle. Subsequently, in 2010, the OIS program revised its cost estimate to $91.3 million, an increase of $19.0 million. This includes $63.3 million for development and approximately $28.0 million for operations and maintenance over a 12-year life cycle. The OIS Program Manager stated that the increase in the estimate was due, in part, to unanticipated changes to the OIS program's scope to better align with the Department of Labor's strategic goals, including securing safe and healthy workplaces, particularly in high-risk industries. For example, according to this official, the agency's methodology for penalty calculations for violators of occupational safety and health rules and regulations was modified, which required a redesign of OIS in order to capture and accurately calculate these changes. The OIS program's current cost estimate does not exhibit all qualities of a reliable cost estimate. Specifically, while the estimate partially reflects key practices for developing a comprehensive, well-documented, and accurate estimate, it does not reflect key practices for developing a credible estimate. Table 19 provides details on our assessment of the OIS program's cost estimate.
The Pension Benefit Guaranty Corporation’s (PBGC) Benefit Administration (BA) is a collection of IT systems and applications that allows PBGC to administer and service the approximately 1.5 million participants in over 4,300 plans that have been terminated and trusteed as part of PBGC’s insurance program for single-employer pensions. The BA program is intended to modernize and consolidate applications, retire legacy systems, and address performance gaps. To do this, the BA program is grouped into four projects—Customer Care, Document Management, Case Management, and Benefit Management—in support of paying accurate and timely payments and providing customer service to participants. The BA program is expected to offer multiple self-service channels to participants, reengineer benefit payment processes to increase efficiency and productivity, and implement enhanced reporting and document management systems. According to the agency, this modernization effort is ultimately expected to increase customer satisfaction, reduce operational costs, and improve data quality. Currently, the program is scheduled to complete modernization and decommission the remaining legacy applications in fiscal year 2015. In 2007, the BA program developed an initial cost estimate of $186.9 million. This included $39.4 million for development and $147.5 million for operations and maintenance over a 5-year life cycle. Subsequently, in 2010, BA revised its cost estimate to $155.9 million, a decrease of $31.0 million. This revised estimate includes $80.7 million for development and approximately $75.2 million for operations and maintenance over a 10- year life cycle. Program officials stated that the decrease in the estimate was due to changes to the program’s schedule milestones and changes to the system’s architecture. The BA program’s current cost estimate does not exhibit all qualities of a reliable cost estimate. Specifically, the estimate partially reflects key practices for developing a comprehensive, well-documented, accurate, and credible estimate. Table 18 provides details on our assessment of the BA program’s cost estimate. The Health Data Repository (HDR) is intended to support the integration of clinical data across the Department of Veterans Affairs and with external healthcare systems such as that of the Department of Defense. Specifically, the system is designed to provide a nationally accessible repository of clinical data by accessing and making available data from existing healthcare systems to support clinical and nonclinical decision- making for the care of the department’s patients. The system is being developed using an Agile software development approach and, currently, the program is working on software releases to improve the ability to access data in VA’s legacy healthcare information system, and intends to achieve full operating capability in 2017. In 2001, the HDR program developed an initial cost estimate of $126.7 million. This included $105.9 million for development and $20.8 million for operations and maintenance over a 7-year life cycle. According to officials, the program revised its estimate each year during the budget cycle; in 2011, HDR revised its cost estimate to $491.5 million, an increase of approximately $364.8 million over its initial cost estimate. This includes $281.9 million for development and $209.6 million for operations and maintenance over a 17-year life cycle. 
Program officials stated that the increase in the cost estimate was primarily due to the unplanned deployment and operation of a prototype system for 5 years, and the delay of the planned date for full operational capability from 2006 to 2017, in part because of changes in the program's scope and technology refreshes (i.e., equipment and storage capacity). The HDR program's current cost estimate does not exhibit any of the qualities of a reliable cost estimate. Specifically, the estimate does not reflect key practices for developing a comprehensive, well-documented, accurate, or credible estimate. Table 20 provides details on our assessment of the HDR program's cost estimate.

The Veterans Benefits Management System (VBMS) is intended to provide a paperless claims processing system to support processing a growing volume of claims—for example, the number of compensation and pension claims submitted in a year passed 1 million for the first time in 2009. According to the department, due to the reliance on paper-based processing, the current system is inefficient and costly, and carries risks to veterans' sensitive information. To address this, VBMS is designed to provide veterans a secure and accessible means to obtain benefits, reduce the claims backlog, implement standardized business practices, and support integration with other veteran-facing systems. The program is currently developing functionality for compensation and pension claims processing, and plans to add additional lines of business in future years. In 2008, the VBMS program developed an initial, high-level cost estimate of $560.0 million for system development over a 5-year life cycle, which did not include costs for operations and maintenance. Subsequently, after revising the estimate each year as part of the program's Office of Management and Budget Exhibit 300 submission, in 2011 VBMS revised its cost estimate to $934.8 million, an increase of approximately $374.8 million over its initial estimate. This includes $433.7 million for development and $501.1 million for operations and maintenance over an 11-year life cycle. Program officials stated that the increase in the estimate was primarily due to incorporating costs associated with operations and maintenance and effort spent on changing to an Agile development approach. The VBMS program's current cost estimate does not exhibit all of the qualities of a reliable cost estimate. Specifically, while the estimate partially reflects key practices for developing a comprehensive and well-documented estimate, it does not reflect key practices for developing an accurate or credible estimate. Table 21 provides details on our assessment of the VBMS program's cost estimate.

Collectively, 13 of the 16 case study programs have revised their cost estimates upward by almost $5 billion. More specifically, these 13 programs have experienced cost increases ranging from about $6 million to over $2 billion. In many cases, cost estimates had to be revised upward to reflect the incorporation of full costs for all life-cycle phases (e.g., development or operations and maintenance), which had not originally been included. Other reasons that programs cited for revising their life-cycle cost estimates upward included changes to program or system requirements, schedule delays, technology upgrades, and system defects, among other things.
Among the remaining 3 programs, 1 program's cost estimate had decreased, 1 had not changed, and 1 was not applicable because the program had only a current cost estimate (see table 22).

In addition to the contact named above, individuals making contributions to this report included Eric Winter (Assistant Director), Mathew Bader, Carol Cha, Jennifer Echard, J. Christopher Martin, Lee McCracken, Constantine Papanastasiou, Karen Richey, Matthew Snyder, and Jonathan Ticehurst.
The federal government plans to spend at least $75 billion on information technology (IT) investments in fiscal year 2012. The size of this investment highlights the importance of reliably estimating the costs of IT acquisitions. A reliable cost estimate is critical to the success of any IT program, providing the basis for informed decision making and realistic budget formation. Without the ability to generate such estimates, programs risk missing their cost, schedule, and performance targets.

GAO was asked to (1) assess selected federal agencies' implementation of cost-estimating policies and procedures, and (2) evaluate whether selected IT investments at these agencies have reliable cost estimates to support budget and program decisions. To do so, GAO compared policies and procedures to best practices at eight agencies. GAO also reviewed documentation supporting cost estimates for 16 major investments at these eight agencies—representing about $51.5 billion of the planned IT spending for fiscal year 2012.

While the eight agencies GAO reviewed—the Departments of Agriculture, Commerce, Defense, Homeland Security, Justice, Labor, and Veterans Affairs, and the Environmental Protection Agency—varied in the extent to which their cost-estimating policies and procedures addressed best practices, most had significant weaknesses. For example, six of the eight agencies had established a clear requirement for programs to develop life-cycle cost estimates. However, most of the eight agencies' policies lacked requirements for cost-estimating training, a standard structure for defining work products, and a central, independent cost-estimating team, among other things. The weaknesses in agencies' policies were due, in part, to the lack of priority given to establishing or enhancing department or agency-level cost-estimating functions. Until agencies address weaknesses in their policies, it will be difficult for them to make effective use of program cost estimates for informed decision making, realistic budget formation, and meaningful progress measurement.

The 16 major acquisition programs had developed cost estimates and were using them, in part, to support program and budget decisions. However, only 1 of the 16 estimates was fully reliable; the other 15 did not fully reflect all four characteristics of a reliable cost estimate identified in the GAO cost-estimating guide: comprehensive, well-documented, accurate, and credible. For example, the estimates for many of these investments did not include all life-cycle costs, such as costs for operating and maintaining the system; did not adequately document the source data and methodologies used to develop the estimate; were not regularly updated so that they accurately reflected current status; and lacked credibility because they were not properly adjusted to account for risks and uncertainty. The inadequate implementation of cost-estimating best practices was largely due to weaknesses in agencies' policies. Until cost-estimating best practices are fully implemented, these programs face an increased risk that managers will not be able to effectively use their cost estimates as a sound basis for informed program and budget decision making.

GAO is recommending that the selected agencies modify their cost-estimating policies to be consistent with best practices and update future cost estimates of the selected acquisition programs to address identified weaknesses.
The seven agencies that commented on a draft of this report generally agreed with GAO’s results and recommendations, although the Environmental Protection Agency disagreed with the assessment of one of its investments. However, GAO stands by its assessment.
The Department of Education manages the federal investment in education and leads the nation's long-term effort to improve education. Established as a separate department in 1980, Education's mission is to ensure equal access to education for the nation's populace and to promote improvements in the quality and usefulness of education. For fiscal year 1995, Education was appropriated $32.4 billion and authorized 5,131 full-time equivalent (FTE) positions to administer and carry out its 240 educational assistance programs, including aid to distressed schools through the Elementary and Secondary Education Act, support for technical training through the Carl D. Perkins Vocational and Applied Technology Education Act, support for special education programs for the disabled, and support for higher education through subsidized and unsubsidized loans and grant programs. Although Education only became a department in 1980, its field structure dates back to 1940, when the Office of Education had its own representatives in federal regional offices to assist in administering federal education laws. Historically, the major function of these offices has been to help local administrators understand federal education legislation and obtain available federal funds for education purposes.

The Department of Labor's mission is to foster, promote, and develop the welfare of U.S. wage earners; improve their working conditions; and advance their opportunities for profitable employment. In carrying out this mission, Labor—established as a department in 1913—administers and enforces a variety of federal labor laws guaranteeing workers' rights to workplaces free from safety and health hazards, a minimum hourly wage and overtime pay, unemployment insurance, workers' compensation, and freedom from employment discrimination. Labor also protects workers' pension rights; provides for job training programs; helps workers find jobs; and tracks changes in employment, prices, and other national economic measurements. Although Labor seeks to assist all Americans who need and want to work, special efforts are made to meet the unique job market needs of older workers, economically disadvantaged and dislocated workers, youth, women, the disabled, and other groups. In fiscal year 1995, Labor had a budget of $33.8 billion and was authorized 17,632 FTE positions to administer and carry out its activities.

In fiscal year 1995, the Department of Education had 72 field offices and the Department of Labor had 1,074. These field offices were located in 438 localities across the 50 states, the District of Columbia, and two territories (see fig. 1). Concentrations of offices are found in the 10 federal region cities, where 279 Education and Labor field offices with a total of 5,987 staff are located (see table 1). About 245 localities had a single Education or Labor field office, and 148 localities had between two and five offices (see fig. 2).

Six of Education's 17 major components maintained field offices (see table 2). Each of the six Education components with field offices had an office in all 10 federal region cities. In total, 94 percent of Education's field staff were located in these 10 cities. The concentration of Education's field offices in the federal region cities is a reflection of the role of Education's field structure, which is principally to ensure the integrity of grant and loan programs and to ensure that federal programs are equitably accessible.
For example, the Office of Postsecondary Education (OPE) formulates policy and oversees the student loan program and other sources of federal support for postsecondary students and schools. The OPE field offices carry out technical assistance, debt collection, and monitoring activities that affect students, institutions, contractors, lenders, and guaranty agencies. The mission of the Office for Civil Rights (OCR) is somewhat different in that its responsibility is to enforce civil rights laws in the nation's schools. Its regional offices carry out these functions.

Two-thirds of the Department of Education's staff was located in headquarters in fiscal year 1995. Of Education's 5,131 authorized FTE positions, 4,835 were actually used, and 1,501, or about 31 percent of this amount, were used to support Education's field operations. Staff usage for three components—OCR, the Office of Inspector General (OIG), and OPE—taken together represented 90 percent of Education's field strength in fiscal year 1995. OCR and OIG used the preponderance of their staff resources in their field offices—about 80 percent for OCR and 68 percent for OIG (see fig. 3). OPE had about a third of Education's total field staff positions.

In fiscal year 1995, 1,074 field offices supported 17 of Labor's 26 components (table 3). Of Labor's total authorized staffing of 17,632 FTEs, about 63 percent (11,095) were allocated to field offices. Labor's field offices were in a total of 437 localities across the nation. About 21 percent (229 offices) of Labor's field offices and 42 percent of on-board field staff were located in the 10 federal region cities; together these offices were supported by 4,486 staff. Most of Labor's components with field offices had more than half of their staff resources assigned to the field (see fig. 4). The Mine Safety and Health Administration (MSHA) had the highest proportion of its staff positions in the field, 91 percent, reflecting its mission to inspect mines and protect the life and health of the nation's miners. Similarly, the Occupational Safety and Health Administration (OSHA) had about 82 percent of its staff positions allocated to its field offices. The Employment Standards Administration (ESA) had 84 percent of its 3,677 staff resources allocated to its 396 field offices. The concentration of Labor's staff in its field offices reflects these components' primary missions. For example, ESA, MSHA, OSHA, and the Pension and Welfare Benefits Administration are all focused on ensuring workers' rights to safe, healthful, and fair workplaces through their enforcement and inspection activities.

The occupational series that predominated in both Departments varied by component and were related to the mission of the component. For example, half the field staff of Education's Office of Special Education and Rehabilitative Services were rehabilitation services program specialists, about half the staff of OCR were equal opportunity specialists, and about 60 percent of OIG's field staff were auditors (see table 4). Similarly, Labor's field staff occupational series were related to a component's primary functions. For example, in fiscal year 1995, ESA had three major subcomponents, each with a different mission; thus, a third of its staff were wage and hour compliance specialists, a quarter were workers' compensation claims examiners, and about 20 percent were equal opportunity specialists (see table 5). Two-thirds of OSHA's staff were safety/health specialists or industrial hygienists.
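The field-staffing shares cited in this appendix reduce to a simple ratio of field FTEs to total FTEs. The sketch below reproduces the Education and Labor figures quoted above; it is illustrative only.

```python
# Minimal sketch of the field-staffing shares cited above, using the
# fiscal year 1995 figures quoted in the text.

def field_share(field_ftes, total_ftes):
    """Portion of an agency's staffing allocated to field operations."""
    return field_ftes / total_ftes

# Education: 1,501 of the 4,835 FTEs actually used supported field operations.
print(f"Education: {field_share(1501, 4835):.0%}")    # about 31 percent

# Labor: 11,095 of 17,632 authorized FTEs were allocated to field offices.
print(f"Labor:     {field_share(11095, 17632):.0%}")  # about 63 percent
```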
Field office staff at both Departments were composed primarily of employees in General Schedule (GS) or General Management (GM) grades 11 through 13, representing about 60 percent of both Education and Labor field staff (see fig. 5). Seven percent of both Education and Labor field staff were senior managers (GS-14 and -15).

Together, Education and Labor spent about 1.3 percent ($867 million) of their combined budget of approximately $66 billion in support of their field operations; more than three quarters of this amount was for staff salaries and benefits. According to the General Services Administration (GSA), Education's 72 field offices occupied about 495,000 square feet of space. Approximately 357,000 square feet (about 72 percent) of Education's field office space was leased from private entities, while the remaining 28 percent was federally owned. In fiscal year 1995, Education spent about $112 million on field office costs such as rent and utilities, staff salaries and benefits, and other administrative costs (see fig. 6). According to GSA, Labor occupied a total of 3 million square feet of space, 2.1 million square feet of which was leased. Labor spent a total of $755 million on its field operations, mostly for staff salaries.

Both Education and Labor have eliminated and/or consolidated a few field offices within the last 5 years to improve service delivery or office operations. Within Education, such restructuring activities occurred in OIG and OCR, while at Labor, ESA, the Office of the American Workplace (OAW), and the Office of the Solicitor reported that they are reorganizing their field offices and functions, along with the Employment and Training Administration (ETA), MSHA, OIG, the Office of the Assistant Secretary for Administration and Management (OASAM), and the Veterans' Employment and Training Service (VETS).

In fiscal year 1995, Education's OIG restructured its 10 regional and 11 field offices into four areas: the Northeast Area includes Boston, New York, Philadelphia, and the Division of Headquarters Operations; the Capital Area includes the Headquarters Audit Region and Accounting and Financial Management staff; the Central Southern Area includes Atlanta and Chicago; and the Western Area includes Dallas, Kansas City, Denver, San Francisco, and Seattle. The OIG reduced the amount of rented space in 10 locations to lower its leasing costs and eliminated the Baton Rouge field office and the Denver regional office as of June 30, 1996. Education's OCR is in the process of reorganizing its headquarters division and 1 field office and 10 regional offices into four mega-regions called enforcement divisions. These enforcement divisions will be (1) Enforcement Division A—New York, Philadelphia, and Boston; (2) Enforcement Division B—Atlanta, Dallas, and the new Washington, D.C./Metro office; (3) Enforcement Division C—Kansas City, Chicago, and Cleveland; and (4) Enforcement Division D—Seattle, San Francisco, and Denver. (For a more complete discussion of Education field office changes, see the component profiles in app. II.)

In fiscal year 1995, Labor's Office of the Solicitor examined its regional office structure in light of agencywide streamlining and reinvention initiatives. The analysis led to the decision to close the Solicitor's branch office in Ft. Lauderdale, Florida. By fiscal year 1999, Labor plans to have completed the reorganization of ESA's Wage and Hour Division and its Office of Federal Contract Compliance Programs (OFCCP) field operations.
Wage and Hour’s eight regional offices will be reduced to five through the consolidation of its current (1) Philadelphia, New York and Boston regional offices into a northeast regional office and (2) Chicago and Kansas City regional offices into a single office. Labor also plans to reduce the number of Wage and Hour district offices and increase its area offices. This will essentially involve redefining the duties of about 10 district offices to provide more frontline services and fewer management-related activities. Also, through employee attrition, management/supervisory staff buyouts, and selective staff hiring, Labor plans to reduce the number of its Wage and Hour staff and its management-to-staff ratios to increase the proportion of frontline employees to better serve its many customers. Four of OFCCP’s regional offices will be combined into two. Its current Chicago and Kansas City regional offices will be merged to form one new office, and its Dallas and Denver regional offices will be combined to form the other. Also, Labor plans to eliminate at least two OFCCP district offices. OAW is in the process of reorganizing to streamline field office management and operations. The target field structure would consist of 20 field offices and 13 resident investigator offices divided into five geographic regions. The reorganization is expected to eliminate two and, in some instances, three layers of program review, significantly expand supervisory span of control, and increase the number of resident investigative offices. ETA has begun to reassess its field structure and is considering realigning and/or consolidating certain programs, functions, services, and field offices. ETA is currently reevaluating its operations in the 10 federal region cities with a view to locating them in the same area or building where feasible. ETA has reduced its total staff by 20 percent, well above its streamlining goal of a 12 percent reduction in total staffing by fiscal year 1999. Four other Labor components—MSHA, OIG, OASAM, and VETS—have also been involved in restructuring efforts. In fiscal year 1995, MSHA eliminated several of its coal mine safety and health subdistrict offices as a way to eliminate a managerial layer. Plans to restructure the OIG’s entire field structure were in process in fiscal year 1995 resulting in the elimination of eight field offices in fiscal year 1996 and a realignment of management functions and fewer GS-15 positions. The OIG is currently evaluating its Washington, D.C., field offices. OASAM, while maintaining a physical presence in each of its regions, reduced its number of regional administrators from 10 to 6. VETS is awaiting congressional approval to reduce the number of field offices that support its operations. (For a more complete discussion of Labor field office changes, see the component profiles in app. III.) The Department of Education provided us with technical comments on a draft of this report, which we have incorporated as appropriate. Education’s letter is printed in appendix VI. The Department of Labor also provided us with comments on a draft of this report and made two specific comments. First, it questioned our definition of a field office, and was concerned that using the same term to refer to all types of offices implied they were all of the same value and that this would be misleading to the reader. The list of field offices we used in this report was provided to us by Labor. 
In addition, the definition of field office used in this report is consistent with the information contained in our June 1995 report, Federal Reorganization: Congressional Proposal to Merge Education, Labor, and EEOC (GAO/HEHS-95-140, June 7, 1995), upon which this report follows up. The definition we used separately counts offices that had different functions or were part of different components, even if they were at the same location. The information contained in appendix III of this report explains the roles, functions, and differences between the various types of field offices associated with each of Labor's components. Second, Labor questioned the utility of using fiscal year 1995 data, noting that the Department was making changes in its field operations that the use of fiscal year 1995 information would not capture. We used fiscal year 1995 data because it was the most recent, comprehensive, and consistent information available on Education's and Labor's headquarters and field operations. The detailed discussion of Labor's components, their staffing, costs, and field office functions contained in appendix III was designed to provide an up-to-date picture of the Department's field operations. It also contains a separate discussion of field office and organizational changes that have occurred since September 30, 1995, and notes future changes that Labor told us were planned. Labor also provided us with technical comments, which we incorporated as appropriate. Labor's comments are printed in appendix VII.

We are sending copies of this report to the Secretaries of Education and Labor; the Director, Office of Management and Budget; and other interested parties. Please contact me on (202) 512-7014 or Sigurd R. Nilsen, Assistant Director, on (202) 512-7003 if you have any questions about this report. GAO contacts and staff acknowledgments are listed in appendix VIII.

We designed our study to gather information on the Departments of Education and Labor field office structures. Specifically, we gathered data on the location, staffing, square footage, and operating cost for each Department in total and for its field offices. For purposes of our review, we defined a field office as any type of office other than a headquarters office—for example, a regional office, district office, or area office—established by an Education or Labor component. To perform our work, we obtained and analyzed General Services Administration (GSA) facility data and the Departments' staffing, cost, and location data. We did our work between January and July 1996 in accordance with generally accepted government auditing standards.

Data were obtained from a variety of sources because no single source maintained all the information we sought. GSA provided data on the amount of space occupied, space usage, and rent and utilities costs for each of Labor's components by city and state. GSA also provided total space and rent and utility cost information for Education, without component breakouts. Education provided information on the square footage occupied by its field offices and their rent and utility costs. Education also provided information on full-time equivalent (FTE) staff positions; on-board staff; personnel costs (salaries and benefits); other operating costs, such as travel and supplies; and office locations by field office. All information received from Labor was obtained through the Office of the Assistant Secretary for Administration and Management (OASAM).
Labor provided data on FTEs by component. To calculate on-board staff counts, we obtained an extract of Labor's personnel management information system showing personnel by component by city and state location. These data were augmented with information from Labor's components. Additionally, Labor provided departmentwide and field information on personnel and other costs by component—but not by field office. To analyze field office space and rent and utility cost data, we obtained an extract of GSA's Public Building Service Information Systems (PBS/IS) and National Electronic Accounting Report System (NEARS) databases covering all Labor and Education space rented or owned by GSA as of September 30, 1995. The PBS/IS database contained square footage allocations and information on space usage and the status and duration of the lease or rental. The NEARS database contained rent and utilities cost information. Both files were organized by GSA assignment number—that is, the unit used by GSA for billing the Departments. The file contained 1,056 unique assignment numbers for Labor and 62 for Education. These assignment numbers do not necessarily indicate different locations or individual field offices. The focus of this review was on field office, rather than headquarters, functions and space. The GSA files used for our square footage, space usage, and rent and utility cost analyses did not contain information linking square footage with the organizational level—for example, area, district, regional, or headquarters—of the specific office. This created a special problem for identifying Washington, D.C., field offices. Thus, because we were unable to separate Washington, D.C., field offices from headquarters, for the purposes of identifying square footage and rent and utility costs, we treated all offices located in Washington, D.C., as headquarters. Eliminating the D.C. offices from this analysis resulted in the exclusion of 18 cases for Education and 17 for Labor, giving us 44 assignment numbers for Education and 1,039 for Labor in our analytic file. Because the level of detail of GSA's information on Education's space was not equivalent to that provided for Labor—that is, for Education we could not identify the organizational level, or component, associated with square footage or cost, nor could we identify square footage by use category—we augmented the data for Education with information directly from the Department. In presenting detailed square footage estimates for Labor in appendix III, we used GSA's four use categories—total square footage; office space; storage; and special square footage, which includes training areas, laboratories and clinics, automated data processing, and food service space. Discussions of square footage for Education in appendix II are in the three categories as forwarded to us by the Department—office, parking, and storage. Total agency square footage estimates presented in the body of the report for both Labor and Education—including rent and utilities costs—were provided to us by GSA.
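To illustrate the mechanics of the analytic-file step described above, the following minimal sketch, written in Python with the pandas library, joins simplified PBS/IS and NEARS extracts on the GSA assignment number and then sets aside Washington, D.C., records. Every file layout, column name, and value here is a hypothetical stand-in; the actual extracts were structured and processed differently.

import pandas as pd

# Hypothetical PBS/IS extract: square footage, keyed by GSA assignment number.
pbs_is = pd.DataFrame({
    "assignment_no": ["L0001", "L0002", "E0001"],
    "department": ["Labor", "Labor", "Education"],
    "city": ["Chicago", "Washington", "Dallas"],
    "state": ["IL", "DC", "TX"],
    "total_sq_ft": [12000, 30000, 8000],
})

# Hypothetical NEARS extract: rent and utilities costs, keyed the same way.
nears = pd.DataFrame({
    "assignment_no": ["L0001", "L0002", "E0001"],
    "rent_and_utilities": [180000, 510000, 96000],
})

# Join the two extracts on the GSA billing unit (the assignment number).
analytic = pbs_is.merge(nears, on="assignment_no", how="inner")

# Because field offices could not be separated from headquarters in
# Washington, D.C., treat all D.C. records as headquarters and drop them
# from the field office space and cost analysis.
field_file = analytic[analytic["state"] != "DC"]

print(field_file.groupby("department")["total_sq_ft"].sum())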
To determine the number of Education and Labor field offices and their locations, we used data prepared for us by the Departments. This information was in the form of listings, organized by component, linking organizational level—such as regional office or district office—with the relevant city and state where an office was located in fiscal year 1995. These listings identified 72 Education (as of April 20, 1995) and 1,037 Labor field offices (as of August 1, 1995). Additional Labor field offices were identified in other documents provided by the Department. As a result, our field office database increased to 1,056 Labor field offices. We based our analyses on this count of Labor offices along with the 72 Education field offices. After Education and Labor reviewed a draft of this report, Labor revised its count of field offices, amending its previous list of field offices operational in fiscal year 1995 as provided to us on August 1, 1995. Our final field office database contained 1,074 Labor and 72 Education field offices. The Departments differed in their ability to provide FTE data. We obtained from Education the number of FTEs used—not authorized—by component and field office because Education does not allocate authorized FTEs below the component level. We obtained from Labor authorized and used FTEs by component, but not by field office, because Labor does not track either authorized or used FTEs at this level. Both Departments provided us with agencywide FTE data. For on-board staff, the Departments provided nonidentifying data on the grade, occupational series, and organizational and geographic location of each employee as of September 30, 1995. Our analysis of Labor field office on-board staff was based on information extracted from the Department's personnel management information system, which indicated 10,632 on-board staff as of September 30, 1995. After reviewing a draft of this report, Labor revised its count of on-board staff to 10,654 on the basis of input by its components. Personnel cost data (salary and benefits), along with other cost information for items such as supplies, materials, and travel, were provided by the Departments in summary form by component at the national level. For both location and staffing information, we aggregated the data and prepared summary statistics by component, city, and state. Similarly, we developed summary statistics of city and state localities for field offices and field staff. Some individuals were employed at locations other than an official field office. Therefore, the total number of localities for field staff is greater than the number of localities for field offices.
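The summary-statistics step described above can be illustrated in the same fashion. The minimal sketch below, again in Python with pandas and again using entirely hypothetical column names and values, aggregates on-board staff counts by component and by city and state locality and counts the distinct localities with staff.

import pandas as pd

# Hypothetical personnel extract: one row per field office location.
staff = pd.DataFrame({
    "component": ["ESA", "ESA", "BLS", "MSHA", "ETA"],
    "city": ["Chicago", "Dallas", "Chicago", "Denver", "Dallas"],
    "state": ["IL", "TX", "IL", "CO", "TX"],
    "on_board": [45, 12, 31, 9, 14],
})

# Summary statistics by component and by city/state locality.
by_component = staff.groupby("component")["on_board"].sum()
by_locality = staff.groupby(["city", "state"])["on_board"].sum()

# Distinct localities with staff; because some individuals worked at
# locations other than an official field office, this count can exceed
# the number of field office localities.
n_staff_localities = len(staff[["city", "state"]].drop_duplicates())

print(by_component)
print(by_locality)
print("Localities with staff:", n_staff_localities)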
Unlike Education, Labor does not centrally maintain information on its components' field office locations, staffing, and costs. Instead, each component maintains such information itself and provides OASAM with information as requested. Thus, much of the information we requested from Labor for the individual components had to be obtained from the components by OASAM. Although each component was asked to give the same information, there is no assurance that all the information provided used consistent definitions and collection methods. Thus, some variation in data quality and consistency is possible. We were unable to report data for those Labor field offices that were housed in state-owned buildings because our analysis of field office space and costs was limited to available GSA data. Additionally, because we could not directly identify square footage and rent and utility costs associated with field office functions located in headquarters space, we eliminated all Washington, D.C., locations from our field office analysis of space and rent and utility costs. As a result, the estimates of costs and space for field locations are understated by the amount allocated to field offices within the District of Columbia. Actual total field office space and rent and utility costs, therefore, may be somewhat higher than reported here. Moreover, square footage use categories reported for Labor were provided by GSA, while Education provided the information itself. Because these data were obtained from two different sources, the resultant calculations cannot be directly compared. We did not visit the field offices and could not evaluate the adequacy of the reported space provided, nor could we determine whether the number and skill levels of the staff were sufficient to perform field office activities. In addition, we did not verify any of the data provided on field office location or staffing by the Departments, nor did we independently verify the accuracy of the data provided by GSA. This appendix provides a snapshot of the Department of Education's field offices as of September 30, 1995. Each profile shows the locations of and describes the mission and activities performed by the field offices supporting six Education components in fiscal year 1995. In addition, each profile provides the following information about the field offices: (1) staffing, (2) space occupied, (3) costs to operate, and (4) field office restructuring activities or plans. (See table II.1 for a summary of staffing, space, and cost data for all six components.) In these profiles, regional, area, district, state, and other types of offices are referred to generically as field offices. Unless otherwise noted, we used Education data to estimate the amount and cost of field office space by component because GSA does not provide square footage totals and rent/utility costs for units within Education. We also used Education data to identify the locations of official field offices; the FTE usage and on-board personnel strength of each component; salary, benefit, and other field office costs; and information about field office restructuring activities within the Department. (Table II.1, which summarizes space in square feet and costs in millions of dollars for the six components, is not reproduced here. Its notes state that space is provided by the Office of Intergovernmental and Interagency Affairs, that space rental costs are included with rental costs for that office, and that staff salaries and benefits and other costs are not available.) The primary mission of the Office for Civil Rights (OCR) is to enforce civil rights laws in America's schools, colleges, and universities. OCR focuses on preventing discrimination. Staff in OCR's 11 field offices (see fig. II.1) investigate and resolve individual and class complaints of discrimination filed by members of the public and initiate compliance reviews of local and state educational agencies or higher education institutions. Field office staff provide targeted technical assistance in priority areas and respond to individual requests for information and assistance. According to OCR officials, field offices are maintained because compliance activities often require on-site investigations at educational agencies and institutions throughout the country. When conducting compliance activities, it is beneficial for OCR field staff to have the support of state and local educational institutions. Table II.2 provides key information about the 10 regional offices and 1 field office that compose OCR's field office structure.
OCR has had a field office presence in all 10 federal region cities (Boston, New York, Philadelphia, Atlanta, Chicago, Dallas, Kansas City, Denver, San Francisco, and Seattle), in addition to an office in Cleveland, Ohio, since before the establishment of the Department of Education in 1980. OCR field offices in the regions are located with all other Education field offices in the regions. As of September 30, 1995, more than half of OCR's field employees were equal opportunity specialists, attorneys, and investigators. Most of the remaining staff performed administrative and managerial duties, such as program manager, management assistant, and administrative officer (see fig. II.2). Two-thirds of the employees were in grades GS-11 through GS-13 (see fig. II.3). Ten of the 11 OCR field offices were regional offices. The Atlanta regional office (Region IV) had the most on-board staff (102), and the Cleveland field office in Region V had the fewest staff (27) (see table II.3). The offices were located in Boston, New York, Philadelphia, Atlanta, Chicago, Dallas, Kansas City, Denver, San Francisco, and Seattle (regional offices) and Cleveland (field office). OCR occupied about 154,848 square feet of Education's total field office space. Of that space, OCR leased 99,806 square feet (64 percent) in privately owned buildings, and 55,042 square feet (36 percent) was in GSA-owned buildings. OCR used about 99 percent of this space for offices and the remainder for storage (see fig. II.4). OCR's total field office costs were $43.7 million in fiscal year 1995. Field office costs included rent and utilities; staff salaries and benefits; and other costs, such as travel, equipment, supplies, and materials. Rent and utility costs were $3.2 million, staff salaries and benefits totaled $35.7 million, and other costs totaled $4.8 million. Currently, OCR is reorganizing its headquarters divisions and field offices into four mega-regions, called enforcement divisions, consisting of 12 sites. The enforcement divisions will be split into Enforcement Division A, which includes New York, Philadelphia, and Boston; Enforcement Division B, which includes Atlanta, Dallas, and the new Washington, D.C./Metro office; Enforcement Division C, which includes Kansas City, Chicago, and Cleveland; and Enforcement Division D, which includes Seattle, San Francisco, and Denver. The redesign of OCR's field management structure is intended to increase efficiency in complaint resolution, provide for better resource coordination and allocation, and reassign a significant percentage of headquarters staff to case-specific duties. According to Education, the change will also reduce administrative layers and supervisory staff to address the goals of the Vice President's National Performance Review. The primary mission of the Office of Inspector General (OIG) is to (1) increase the economy, efficiency, and effectiveness of Education programs and operations and (2) detect and prevent fraud, waste, and abuse in them. Staff in 21 field offices are responsible for auditing and investigating activities related to Education's programs and operations in their respective geographic locations (see fig. II.5). Staff perform program audits to determine compliance with applicable laws and regulations, economy and efficiency of operations, and effectiveness in achieving program goals.
Auditors and investigators inspect entities about which there are indications of abuse significant enough to warrant a recommendation to curtail federal funding. Staff also investigate allegations of fraud by recipients of program funds and employee misconduct involving Education's programs or operations. According to Education, because program effectiveness audits require on-site work to accurately assess program results, field offices help to save travel dollars. A field presence also encourages the development of strong working relationships with state and local officials. The information gleaned from these officials increases the OIG's effectiveness. Table II.4 provides key information about the 10 regional offices and 11 suboffices (known within Education as field offices) that compose OIG's field office structure. OIG maintained a field office presence in many of its regions prior to the establishment of the Department of Education in 1980. In fiscal year 1995, OIG operated more field office locations than any other Education component. Only two (OIG and OCR) of Education's six components maintained field offices other than regional offices. OIG staff were located in nine federal region cities; at the Washington, D.C., headquarters office; and in 11 field locations (Boston; New York; Philadelphia; Atlanta; Chicago; Dallas; Kansas City; Denver; San Francisco; Seattle; Puerto Rico; Pittsburgh; the District of Columbia; Nashville; Plantation, Florida; St. Paul; Austin; Baton Rouge; Long Beach; and Sacramento). OIG field offices in the federal regions are located with all Education field offices. As of September 30, 1995, auditors and criminal investigators made up approximately 92 percent of OIG's field office staff. The remaining staff performed managerial and administrative duties, such as management services specialist, investigative assistant, administrative officer, and clerk (see fig. II.6). Seventy-two percent of the employees were in grades ranging from GS-11 to –13 (see fig. II.7). The Chicago regional office had the most on-board staff (28), and two offices—Nashville and Seattle—had the fewest staff (4 persons each) (see table II.5). The offices consisted of regional offices in Boston, New York, Philadelphia, Atlanta, Chicago, Dallas, Kansas City, Denver, San Francisco, and Washington, D.C., and field offices in Puerto Rico; Pittsburgh; Washington, D.C.; Plantation, Fla.; Nashville; St. Paul; Austin; Baton Rouge; Long Beach; Sacramento; and Seattle. OIG field offices occupied 74,594 square feet of Education's total field office space. Of that space, OIG leased 45,050 square feet (60 percent) in privately owned buildings, and 29,544 square feet (40 percent) of space was in GSA-owned buildings. OIG used about 84 percent of this space for offices and the remainder for parking and storage (see fig. II.8). OIG's total field office costs were $18.3 million in fiscal year 1995. Field office costs included rent and utilities; staff salaries and benefits; and other costs, such as travel, equipment, supplies, and materials. Rent and utility costs were $1.3 million, staff salaries and benefits totaled $14.2 million, and other costs totaled $2.8 million.
In July 1995, OIG restructured its 10 regional and 11 field offices into four areas: the Northeast Area (includes Boston, New York, Philadelphia, and the Division of Headquarters Operations); the Capital Area (includes the Headquarters Audit Region and Accounting and Financial Management staff); the Central Southern Area (includes Atlanta and Chicago); and the Western Area (includes Dallas, Kansas City, Denver, San Francisco, and Seattle). As of June 1996, OIG had completed the following cost-cutting initiatives: the reduction of space in selected areas to minimize leasing costs, including the identification thus far of four nonheadquarters sites for possible rent savings (Austin, Nashville, Seattle, and St. Paul), and the elimination of one field office (Baton Rouge) and one regional office (Denver) where the amount of work no longer justified an on-site presence. A number of auditor and investigative positions will be filled at other locations where the workload warrants additional staff. The primary mission of the Office of Intergovernmental and Interagency Affairs (OIIA) is to provide intergovernmental and public representation of the Secretary and the Department except in matters where Assistant Secretaries or their equivalents manage regional operations. OIIA is responsible for providing overall leadership in coordinating regional and field activities. OIIA has a Secretary's regional representative in each of its 10 regional offices who serves as the Secretary's field office representative. (See fig. II.9.) The primary mission of the Office of Management (OM) is to provide the administrative services required to assist field office staff. According to Education, regional staff (1) administer the Federal Real Property Assistance Program to ensure maximum utilization of surplus federal property for educational purposes and (2) provide personnel services to regional employees in other program offices. Table II.6 provides key information about the 10 regional offices that compose OIIA's and OM's field office structure. Education did not provide separate cost information for OM. Education does not maintain information on headquarters office rent by component. Rent for OM field office staff is included with OIIA rental costs. OIIA and OM had staff in the 10 federal region cities (Boston, New York, Philadelphia, Atlanta, Chicago, Dallas, Kansas City, Denver, San Francisco, and Seattle). In fiscal year 1995, the total on-board staff in OIIA's and OM's 10 field offices was 69 (47 for OIIA and 22 for OM). As of September 30, 1995, OIIA and OM staff performed duties in 10 job categories. OIIA had staff in six of those categories, and OM had staff in five. Staff in clerical job categories supported both OIIA and OM. Three-fourths of regional OIIA staff were classified as Secretary's regional representative, program assistant/clerk, or public affairs specialist. Approximately 73 percent of OM staff performed duties as personnel management specialists. The remaining staff performed other managerial and administrative duties, such as personnel assistant, secretary, clerk, realty specialist (OM), education program specialist (OIIA), and administrative officer (OIIA) (see figs. II.10 and II.11). OM had no staff at the GS-15 level; however, 21 percent of OIIA staff were GS-15s—representing the largest percentage of staff at any one grade level in the component. These GS-15s generally served as Secretary's regional representatives. OIIA staff were almost evenly distributed among grades GS-1 through –13.
Most OM staff were in grades GS-11 through –13 (see figs. II.12 and II.13). All 20 of the OIIA and OM field offices were regional offices (see table II.7). In fiscal year 1995, OIIA occupied 46,315 square feet of Education's total field office space. Of that space, OIIA leased 28,561 square feet (62 percent) in privately owned buildings, and 17,754 square feet of space (38 percent) was in GSA-owned buildings. OIIA used 99 percent of this space for offices and the remainder for storage (see fig. II.14). OIIA's total field office costs were $4.6 million in fiscal year 1995. Field office costs included rent and utilities; staff salaries and benefits; and other costs, such as travel, equipment, supplies, and materials. Rent and utility costs were $948,000, staff salaries and benefits totaled $2.8 million, and other costs totaled $915,000. OM cost information for field office staff salaries and benefits and other costs was unavailable. OIIA and OM reported no field office restructuring activities or plans. The primary mission of the Office of Postsecondary Education (OPE) is to administer postsecondary education and student financial assistance programs. Programs of student financial assistance include Pell grants, supplemental educational opportunity grants, grants to states for state student incentives, direct loans to students in institutions of higher education, work-study, and the guaranteed student loan program. OPE programs also provide assistance for increasing access to postsecondary education programs, improving and expanding American educational studies and services, improving instruction in crucial academic subjects, and supporting international education. OPE maintains 10 field offices to perform activities associated with (1) training, technical assistance, and oversight of student aid programs; (2) loan servicing and debt collection; and (3) overseeing specific higher education projects (see fig. II.15). Field staff conduct program reviews of institutions to determine compliance with Title IV requirements, provide training and technical assistance for financial aid and business officers at institutions, and monitor operations at guaranty agencies. Staff also collect defaulted loans and other debts, contract with servicers, monitor collection contracts, and help in the preparation of legal actions. Regional staff also serve as focal points and as experts assisting with field readings for OPE's higher education programs. Staff may also be called on to work on school-to-work initiatives. According to Education, because field office staff gain in-depth knowledge of the institutions in their regions, effectiveness is increased. Regional training facilities provide hands-on use of the computer programs needed to award student aid and determine student eligibility. They are also a place for institutions, lenders, and guaranty agencies to call on for technical assistance and specific help on an individual basis. In addition, several oversight activities are supported by information gathered from on-site reviews. Table II.8 provides key information about the 10 regional offices that constitute OPE's field office structure. In fiscal year 1995, OPE's Field Operations Service and Division of Project Services had staff in all 10 federal region cities, and the Debt Collection Service had staff in three region cities—Atlanta, Chicago, and San Francisco.
Half of all OPE employees were specialists in one of the following job categories: lender review specialist, institutional review specialist, contract monitor specialist, training specialist, paralegal specialist, education program specialist, computer specialist, or accounts resolution specialist/clerk. The remaining staff included management analysts, student financial accounts examiners, program managers, data transcribers, administrative officers, and clerks (see fig. II.16). About half of the employees were in grades ranging between GS-11 and –13. Most of the remaining employees were in grades ranging from GS-7 through –10 (see fig. II.17). The Chicago regional office had the most on-board staff, and Boston had the fewest staff (see table II.9). In fiscal year 1995, OPE occupied about 125,456 square feet of Education's total field office space. Of that space, OPE leased 82,587 square feet (66 percent) in privately owned buildings and 42,869 square feet of space (34 percent) in GSA-owned buildings. OPE used about 99 percent of this space for offices and the remainder for parking (see fig. II.18). OPE's total field office costs were $38.5 million in fiscal year 1995. Field office costs included rent and utilities; staff salaries and benefits; and other costs, such as travel, equipment, supplies, and materials. Rent and utility costs were $2.5 million, staff salaries and benefits totaled $28.4 million, and other costs totaled $7.6 million. OPE reported no field office restructuring activities or plans. The Office of Special Education and Rehabilitative Services (OSERS) administers comprehensive coordinated programs of vocational rehabilitation and independent living for individuals with disabilities. OSERS programs include support for the training of teachers and other professional personnel; grants for research; financial aid to help states initiate, expand, and improve their resources; and media services and captioned films for people who are hearing-impaired. The Rehabilitation Services Administration (RSA) is the only OSERS unit with field offices. RSA coordinates vocational rehabilitation services programs that help individuals with physical or mental disabilities to obtain employment through the provision of such supports as counseling, medical and psychological services, job training, and other individualized services. In addition, RSA coordinates and funds a wide range of formula and discretionary programs in areas such as training of rehabilitation personnel, rehabilitation research and demonstration projects, Independent Living, Supported Employment, and others. The 10 OSERS field offices (see fig. II.19) that support RSA activities provide leadership, technical assistance, monitoring, consultation, and evaluation services and coordinate RSA and other resources used in providing services to disabled individuals through state-federal administered programs and through grantees receiving discretionary project funds. These offices are also responsible for helping colleges, universities, and other organizations and agencies to develop, implement, improve, and expand training programs designed to prepare a wide variety of rehabilitation workers who provide services to disabled individuals. According to Education officials, an OSERS regional presence encourages interactions with states and providers of services and provides unique insights into the issues involved in the rehabilitation of people with disabilities.
It enables federal-state interactions closer to the point of service delivery, where the unique circumstances and considerations of each state and grantee are best understood. Regional office staff have more frequent and extended contacts with state agency staff and other grantees, resulting in long-term, customer-oriented relationships and trust. Table II.10 provides key information about the 10 regional offices that make up OSERS' field office structure. OSERS had staff in all 10 federal region cities (Boston, New York, Philadelphia, Atlanta, Chicago, Dallas, Kansas City, Denver, San Francisco, and Seattle). OSERS' field offices in the regions are located with all other Education regional offices. As of September 30, 1995, almost half of all OSERS on-board staff were classified as rehabilitation services program specialists. Almost one-third were employed as financial management specialists and grant management specialists. The remaining staff were classified as clerks, staff assistants, and secretaries (see fig. II.20). Most employees were in grades ranging from GS-11 through –13 (see fig. II.21). All 10 of the RSA field offices were regional offices. The Seattle regional office had the fewest on-board staff (4), and the remaining offices had between 5 and 10 employees (see table II.11). On September 30, 1995, OSERS occupied 28,632 square feet of Education's total field office space. OSERS leased 17,735 square feet (62 percent) in privately owned buildings and 10,897 square feet (38 percent) in GSA-owned buildings. OSERS used 97 percent of this space for offices and the remainder for storage and parking (see fig. II.22). OSERS' total field office costs were $6.4 million in fiscal year 1995. Field office costs included rent and utilities; staff salaries and benefits; and other costs, such as travel, equipment, supplies, and materials. Rent and utility costs were $553,000, salaries and benefits were $4.8 million, and other costs were $1.1 million. OSERS reported no field office restructuring activities or plans. This appendix provides a snapshot of the Department of Labor's field offices as of September 30, 1995. Each profile shows the locations of and describes the mission and activities performed by the field offices supporting 10 Labor components in fiscal year 1995. In addition, each profile provides the following information about the field offices: (1) staffing, (2) space occupied, (3) costs to operate, and (4) field office restructuring activities or plans. (See table III.1 for a summary of staffing, space, and cost data for all 10 components.) In these profiles, regional, area, district, state, and other types of offices are referred to generically as field offices. Because neither GSA nor Labor maintains information about field offices located in state-owned buildings, we were unable to identify the exact amount and cost of all space that Labor field staff occupied in fiscal year 1995. (Labor is not billed for the use of space in state-owned buildings.) Unless otherwise noted, we used (1) GSA data to estimate the amount and cost of Labor field office space and (2) Labor information to identify the locations of official field offices; the numbers of FTEs and on-board personnel for each component; and salary, benefit, and other field office costs. Labor also provided information about field office restructuring activities. Many small organizations within the Department are consolidated for administrative purposes in a Departmental Management (DM) account.
This account consolidates a wide range of agencywide managerial, administrative, technical, and support activities carried out by approximately 20 different units. Our discussion of Labor's DM function includes only the following units that were supported by field offices in fiscal year 1995: (1) the Office of the Assistant Secretary for Administration and Management (OASAM), (2) the Office of the Solicitor (SOL), (3) the Office of Administrative Law Judges (ALJ), (4) the Office of Public Affairs (OPA), (5) the Office of Congressional and Intergovernmental Affairs (OCIA), and (6) the Women's Bureau (WB). Figure III.1 shows the locations of the 62 field offices that supported Labor's DM function in fiscal year 1995. Table III.2 provides key information about DM's 47 regional, 8 field, and 7 branch offices. As shown in table III.3, field offices in the 10 federal region cities and 11 other localities supported DM in fiscal year 1995. These other localities included Camden, N.J.; Newport News, Va.; Metairie, La.; Long Beach, Calif.; Nashville, Tenn.; Birmingham, Ala.; and Arlington, Va. The field offices that support the DM function generally perform the following activities: Office of the Assistant Secretary for Administration and Management. OASAM staff are responsible for providing a centralized source of administrative, technical, and managerial support services. Each of OASAM's 10 regional offices—located in the federal region cities—provides a full range of services to all Labor components in their field offices in the following areas: financial management, including payroll, travel, accounting, and voucher payment services; personnel classification, recruitment, training, and position management services; general administrative support, including procurement, property and space management, communications, and mail services; automatic data processing management, including programming support; and safety and health services, including safety inspections of regional Job Corps Centers and support for wellness and fitness programs for Labor field office employees. In addition, staff in OASAM's regional offices helped to manage and direct affirmative action and equal employment opportunity programs within Labor, ensuring full compliance with title VII of the Civil Rights Act of 1964; title IX of the Education Amendments of 1972, as amended; title I of the Civil Rights Act of 1991; Section 504 of the Rehabilitation Act of 1973, as amended; and the Age Discrimination Act of 1975, as amended; and investigating certain complaints alleging discrimination on the basis of disability arising under the Americans with Disabilities Act. According to Labor, OASAM's field presence in all of these areas allows the personal contact with program managers and employees that enhances the Department's ability to provide effective and efficient support services. OASAM's staff work in localities with the greatest concentrations of Labor managers and employees. Office of the Solicitor. SOL is responsible for providing the Secretary of Labor and other Department officials with the legal services required to accomplish Labor's mission and the priority goals set by the Secretary. SOL devotes over two-thirds of its resources to Labor's major enforcement programs (for example, OSHA and MSHA). Its eight regional offices and seven branch offices provide legal services and guidance to each of the Labor components' regional administrators.
Within a specific geographic area, each regional or branch office primarily performs trial litigation support for Labor's enforcement programs and provides legal support and services to selected Labor components that perform work in the area. Office of Administrative Law Judges. Judges at the eight field offices primarily preside over cases related to Labor's Black Lung and Longshore programs. These programs provide income support for workers disabled in coal mining and longshore operations. Federal regulations require that hearings be held within 75 miles of a Black Lung claimant's residence. Labor also applies this standard to Longshore cases. Approximately 60 percent of all Black Lung cases each year are handled by the three ALJ field offices in Camden, New Jersey; Cincinnati, Ohio; and Pittsburgh, Pennsylvania. Four other field offices handle 75 percent of Labor's Longshore cases annually. According to Labor, ALJ's field presence allows the judges to establish better working relationships with local attorneys. As a result, compliance with Labor laws is achieved more readily because the local bar is more familiar with case law in specific localities. Office of Public Affairs. Staff at OPA's 10 regional offices, located in the federal region cities, provide, for example, (1) media relations services, such as issuing press releases and arranging media coverage of Labor programs and law enforcement actions; (2) public information services designed to educate and inform workers, employers, and the general public about their rights and responsibilities under the laws and programs administered by Labor; and (3) publicity services that advertise public meetings, conferences, and special projects sponsored by Labor's components. According to Labor, OPA's field offices allow staff to identify local news media and reporters that have an interest in particular Labor programs or events. Field staff are then able to alert reporters to news releases and respond to questions in a timely manner. Office of Congressional and Intergovernmental Affairs. OCIA's function is generally performed by one person—the Secretary's representative. These representatives (1) serve as the ongoing liaison in the region with governors, mayors, state officials, congressional offices, organized labor, and the business community; (2) represent Labor at educational forums, meetings, and regional conferences; (3) educate public officials and constituents about the policies, programs, and initiatives of the Secretary of Labor and the agency; (4) provide a regional perspective and feedback to headquarters on policies and programs; and (5) carry out special projects in the regions for the Secretary. Women's Bureau. WB's 10 regional offices play a key role in administering two federal programs: those authorized by the Nontraditional Employment for Women Act (P.L. 102-235) and the Women in Apprenticeship and Nontraditional Occupations Act (P.L. 102-530). In addition, regional office staff (1) make presentations to the public and the media on a variety of issues, such as women's job rights, labor force participation, job training activities, and workplace safety and health issues; (2) work with federal, state, and local government officials on behalf of working women; (3) provide technical assistance and education services to women in the workforce; and (4) organize public meetings on working women's issues. DM staff represented over 40 different professional and administrative job categories.
Attorneys and judges made up approximately 30 percent of the DM field office workforce (see fig. III.2). The remaining staff included paralegal specialists, personnel management specialists, personnel classification clerks, fiscal clerks, and accountants. Approximately 34 percent of DM field office staff were at grades GS-11, –12, and –13. Staff at the GS-5 and –7 grade levels constituted 22 percent of its field office workforce (see fig. III.3). In fiscal year 1995, DM field offices occupied space in 59 buildings throughout the United States, totaling 482,648 square feet. According to GSA data, 207,813 square feet of space was owned by GSA and 274,835 square feet was leased from privately owned sources. Most of the space occupied by the DM functions was used for offices, and the remainder for storage and other uses, such as training, conferences, and data processing (see fig. III.4). DM field costs totaled $47.2 million in fiscal year 1995. These costs included rent and utilities; staff salaries and benefits; and other costs, such as equipment, supplies, and materials. Rent and utility costs were $8.7 million, which was 18 percent of the function's total field office costs. Costs for staff salaries and benefits totaled $32.9 million and other costs totaled $5.6 million, which were about 70 and 12 percent, respectively, of the total field office costs for this function. In fiscal year 1995, SOL examined its regional office structure in light of agencywide streamlining and reinvention initiatives. This analysis led to the decision to close the SOL branch office in Ft. Lauderdale, Florida. Effective in fiscal year 1996, OASAM, while maintaining a physical presence in each of its regions, will reduce its number of regional administrators from 10 to 6. The primary mission of the Bureau of Labor Statistics (BLS) is to collect, process, analyze, and disseminate data relating to employment, unemployment, and other characteristics of the labor force; prices and consumer expenditures; wages, other worker compensation, and industrial relations; productivity and technological change; economic growth and employment projections; and occupational safety and health. These basic data—practically all supplied voluntarily by business establishments and members of private households—are issued in monthly, quarterly, and annual news releases; bulletins, reports, and special publications; and periodicals. Statistical data are also made available to the general public through electronic news services, magnetic tape, diskettes, and microfiche, as well as through the Internet. BLS conducts many of its mission-related activities through its eight field offices (see fig. III.5). According to Labor, BLS' field structure maximizes the effectiveness of BLS' data collection activities, saves travel expenditures, and accommodates workload requirements. Table III.4 provides key information about BLS' eight regional offices. In fiscal year 1995, BLS maintained regional offices in the following cities: Boston, New York, Philadelphia, Atlanta, Chicago, Dallas, Kansas City, and San Francisco. BLS regional offices (1) issue reports and releases usually presenting locality or regional issues and (2) assist business, labor, academic, and community groups with using the economic statistical data BLS produces.
Regional office staff also supervise the work of part-time field staff who (1) collect data for the Consumer Price Index and occupational compensation surveys and (2) survey firms for the Producer Price and Export and Import Price programs. These "outstationed" staff performed their BLS duties in over 70 locations throughout the United States. BLS employed only about 9 percent of all Labor on-board field office staff in fiscal year 1995 but had the largest proportion of part-time staff among Labor components with field offices—34 percent of BLS staff worked part time. Part-time staff in the other components represented less than 10 percent of these components' on-board staffs. BLS staff represented over 15 different professional and administrative job categories. Economists and economic assistants made up approximately 80 percent of BLS' field office workforce (see fig. III.6). The remaining staff included statisticians, computer specialists, public affairs assistants, and clerical support staff. Approximately 46 percent of BLS' field office staff were GS-11s, –12s, and –13s. Staff at the GS-5 and –6 pay levels made up about 23 percent of BLS' field office workforce (see fig. III.7). From one to five BLS staff persons worked in 84 percent of the U.S. localities with BLS staff. Nine localities had over 30 BLS employees. Generally, economic assistants in grades GS-5 through –7 provided the BLS presence in those localities with only one staff person. In several cases, a GS-11 or –12 economist represented BLS in the locality. In fiscal year 1995, BLS field offices occupied space in 84 buildings throughout the United States, totaling 219,324 square feet. Over 83,600 square feet was owned by GSA, and 135,659 square feet was leased from private sources. (We were unable to determine how much space, if any, BLS occupied in state-owned buildings.) BLS used 195,663 square feet—or about 89 percent—of this space for offices and the remainder for storage and other uses (see fig. III.8). At 50 of the 84 buildings BLS occupied in fiscal year 1995, other Labor components were also located at the same address. Field costs for BLS totaled $51.1 million in fiscal year 1995. These costs included rent and utilities; staff salaries and benefits; and other costs, such as equipment, supplies, and materials. Rent and utility costs were $4.8 million, which was 9 percent of BLS' total field office costs. Costs for staff salaries and benefits totaled $36.5 million and other costs totaled $7.9 million, which were about 71 and 15 percent, respectively, of BLS' total field office costs. BLS reported no field office restructuring activities or plans. The Employment Standards Administration (ESA) is responsible for administering and directing programs dealing with minimum wage and overtime standards; registration of farm labor contractors; determining prevailing wage rates to be paid on federal government contracts and subcontracts; family and medical leave; nondiscrimination and affirmative action for minorities, women, veterans, and government contract and subcontract workers with disabilities; and workers' compensation programs for federal and certain private sector employers and employees. The field structure for ESA—a total of 396 field offices—supports three program areas: the Wage and Hour Division, the Office of Federal Contract Compliance Programs, and the Office of Workers' Compensation Programs (see fig. III.9). The largest division within ESA is the Wage and Hour Division (WHD), with its 8 regional offices, 54 district offices, 45 area offices, and 192 field offices.
According to Labor, in order to enforce federal standards for working conditions and wages, WHD focuses its investigative efforts mainly on industries that employ large numbers of workers in low-wage jobs because this is where wage, overtime, and child labor violations often occur. WHD field staff respond to complaints alleging violations and target their enforcement efforts at employers with a high likelihood of repeated and egregious violations. WHD field staff also detect and remedy violations of overtime, child labor, and other labor standards. With over 280 offices nationwide, WHD supports its mission by providing a local presence in most of the metropolitan areas of the country. According to Labor, WHD's streamlining plan will make its mission more challenging because having fewer offices will increase travel costs and possibly impede access to some geographic areas. The Office of Federal Contract Compliance Programs (OFCCP), with its 10 regional offices, 45 district offices, and 10 area offices, conducts compliance reviews of supply, service, and construction companies with federal contracts and federally assisted programs for construction, alteration, and repair of public works. OFCCP ensures that prevailing wages are paid and overtime standards are met in accordance with the provisions of the Davis-Bacon Act (40 U.S.C. 276a) as well as the Service Contract Act (41 U.S.C. 351), the Public Contracts Act, and the Contract Work Hours and Safety Standards Act. According to Labor, OFCCP's field structure provides a local contact for representatives of federal contractors to obtain information and technical assistance when establishing their affirmative action programs. It also provides local contacts and local offices that help provide women and minorities with more employment opportunities as well as a place to file complaints against federal contractors. Labor maintains that these local offices decrease travel costs because OFCCP staff make less frequent overnight trips. The Office of Workers' Compensation Programs (OWCP) is supported by 10 regional offices, 34 district offices, and 7 field offices that are staffed on a part-time basis. OWCP's primary responsibilities are to administer compensation programs that pay federal employees, miners, longshore workers, and other workers for work-related injuries, disease, or death. These compensation programs are authorized by the Federal Employees' Compensation Act; the Longshore and Harbor Workers' Compensation Act and its various extensions; and the Black Lung Benefits Act. OWCP also administers the Black Lung Disability Trust Fund and provides budget, automated data processing, and program technical support for the compensation programs. OWCP's field structure, according to Labor, gives claimants and employers easier access to assistance when processing claims and provides faster and more efficient service. Field offices must be located near the homes and workplaces of the parties involved in claims to ensure the timely resolution of claims and to minimize staff travel costs. Table III.5 provides key information about the 28 regional offices, 133 district offices, 199 field offices, and 55 area offices that make up ESA's field office structure. ESA's various field offices generally perform the following functions: Regional offices. WHD, OFCCP, and OWCP regional offices generally provide the executive direction and administrative support for all other respective field offices operating in a particular region. District offices.
A WHD district office provides the day-to-day management and supervision of selected area and field offices. WHD district office staff provide education outreach and investigate alleged violations of the Fair Labor Standards Act (29 U.S.C. 201) and other labor standards laws. OFCCP district offices supervise and manage selected area offices. Within OWCP, district office staff process Longshore and Harbor Workers', Coal Mine Workers', or Federal Employees' Compensation Act claims. OWCP district offices work with all parties involved in a claim to secure the information needed to disallow or accept the claim. OWCP district offices serve as information repositories for employers and employees about the various disability compensation programs that Labor administers. Area offices. WHD area office staff investigate alleged violations of the Fair Labor Standards Act and other labor standards laws. Labor considers WHD area office staff "frontline" employees because they inspect work sites and interview employers and employees as part of their investigatory and enforcement activities. WHD area offices also make available to employers and workers information about the Fair Labor Standards Act, other laws, and their rights and responsibilities under the law. Staff at OFCCP area offices investigate allegations of unfair bidding and hiring practices involving minority construction contractors and suppliers. OFCCP area offices also work with employers to ensure compliance with applicable federal contract laws and procedures. Field offices. WHD field offices are usually staffed by one or two compliance specialists, who are also considered frontline workers by Labor. They perform the same investigatory and enforcement activities as the WHD area offices but in many more locations. OWCP's field offices are maintained on a part-time basis by the Black Lung program and provide a local point of contact for claimants and other interested parties. ESA employed about 28 percent of all Labor on-board field office staff in fiscal year 1995. ESA staff represented over 30 different professional and administrative job categories. Wage/hour compliance specialists, workers' compensation claims examiners, and equal opportunity specialists made up the largest proportion of ESA's field office workforce (see fig. III.10). The remaining staff included wage analysts, management and program analysts, and clerical and other support staff. Less than 2 percent of ESA's staff worked part time. Approximately 64 percent of ESA's field office staff were at the GS-11, –12, and –13 grade levels. Staff at the GS-5 and –6 pay levels constituted about 12 percent of ESA's field office workforce (see fig. III.11). From one to five ESA staff worked in almost 70 percent of the 280 U.S. localities with ESA staff (see table III.6). GS-11 and –12 wage/hour compliance specialists primarily represented ESA in those localities with only one ESA staff person. Seventeen localities had over 30 ESA employees; these localities generally were associated with an ESA regional office. In fiscal year 1995, ESA field offices occupied space in 335 buildings throughout the United States, totaling 769,237 square feet. About 272,200 square feet was owned by GSA, and about 497,000 square feet was leased from privately owned sources. ESA used about 671,000 square feet of this space for offices and the remainder for storage and other activities (see fig. III.12).
At 138 of the 335 buildings ESA occupied in fiscal year 1995, other Labor components were also located at the same address. Field costs for ESA totaled $179.2 million in fiscal year 1995. These costs included rent and utilities; staff salaries and benefits; and other costs, such as equipment, supplies, and materials. Rent and utility costs were $14.9 million, which was about 8 percent of ESA's total field office costs. Costs for staff salaries and benefits totaled $156 million and other costs totaled $8.3 million, which were about 87 and 5 percent, respectively, of ESA's total field office costs. By fiscal year 1999, Labor plans to have completed the reorganization of ESA's WHD and OFCCP field operations. WHD's eight regional offices will be reduced to five through the consolidation of its current (1) Philadelphia, New York, and Boston regional offices into a northeast regional office and (2) Chicago and Kansas City regional offices into a single office. Labor also plans to reduce the number of WHD district offices and increase the number of its area offices. This will essentially involve redefining the duties of about 10 district offices to provide more frontline services and fewer management-related activities. Also, through employee attrition, management/supervisory staff buyouts, and the conversion of supervisory positions to senior technical positions, Labor plans to reduce its WHD staff and management-to-staff ratios to increase the proportion of frontline WHD employees to better serve its many customers. Four of OFCCP's regional offices will be combined into two. Its current Chicago and Kansas City regional offices will be merged to form one new office, and its Dallas and Denver regional offices will be combined to form the other. Also, Labor plans to eliminate at least two OFCCP district offices. OFCCP will continue to review additional district offices to determine whether more can be converted into area offices by fiscal year 1999. The Employment and Training Administration (ETA) fulfills responsibilities assigned to Labor that relate to employment services, job training, and unemployment insurance. ETA administers, among other programs, the following: the Federal Unemployment Insurance System, the U.S. Employment Service, federal activities under the National Apprenticeship Act, Adult and Youth Training Programs (title II of the Job Training Partnership Act), the dislocated worker program under the Economic Dislocation and Worker Adjustment Assistance Act (title III of the Job Training Partnership Act), Job Corps (title IV of the Job Training Partnership Act), federal activities under the Worker Adjustment and Retraining Notification Act, the Trade Adjustment Assistance Program, and the Senior Community Service Employment Program (title V of the Older Americans Act). ETA's 146 field offices (see fig. III.13) help to administer the nation's federal-state employment security system; fund and oversee programs to provide job training for groups having difficulty entering or returning to the workforce; formulate and promote apprenticeship training standards and programs; promote school-to-work initiatives, one-stop career centers, and labor market information; and conduct continuing programs of research, development, and evaluation. According to Labor, several reasons exist for ETA's field structure. Many of ETA's regional and field offices are located in the same area to reduce overhead and administrative costs.
Their locations facilitate direct and more frequent on-site contact with states and local entities and the provision of timely information and feedback. Field office staff can provide on-site technical assistance, which would be more costly, infrequent, and less efficient if staff were more centralized. The close proximity of ETA staff to its state and local grantees and contractors is essential to the agency's ability to oversee and maximize program integrity while minimizing travel costs. Table III.7 provides key information about the 10 regional, 50 state, 8 area, and 78 local offices that constituted ETA's field office structure. ETA's various field offices generally support its major program activities—training and employment services, Job Corps, unemployment insurance, and apprenticeship training through the Bureau of Apprenticeship and Training (BAT). The regional offices perform activities related to the Job Training Partnership Act and several other programs. The balance of ETA's field offices—state, area, and local offices—are part of the BAT program. BAT is unique to ETA in that it provides consultant services to employers, employer groups, unions, employees, and related business and trade associations using private-sector resources to improve the skills of the workforce. The staff develop voluntary standards and agreements between the parties and work to ensure that the standards for work, training, and pay are mutually achieved for apprentices and their sponsors. ETA's field offices perform the following functions: Regional offices. Regional office staff ensure the efficient administration of the training and employment services operated by state grantees under the Job Training Partnership Act, Wagner-Peyser Act, Trade Act, and North American Free Trade Agreement; support state and local one-stop career center and school-to-work system-building efforts; and provide consultation and guidance to state grantees for the planning and operation of state and federal unemployment insurance and related wage-loss compensation programs. The BAT regional offices are responsible for directing, planning, and administering effective BAT programs and for ensuring that ETA's school-to-work initiatives are incorporated in training programs when feasible. Job Corps regional offices ensure that centers are safe learning and living environments for students; implement program policies; and coordinate with schools and training programs to support Job Corps programs. State offices. State office staff develop, coordinate, promote, and implement apprenticeship and allied employment and training programs in industry on a statewide basis. They also provide technical assistance to industry, management, labor, education, and other groups concerned with economic development within a state. Area and local offices. Staff in these offices perform the same basic functions done by state office staff, except on a less-than-statewide basis. ETA staff represented 24 different professional and administrative job categories. Most of ETA's field office workforce was composed of manpower development specialists, apprenticeship training representatives, unemployment insurance program specialists, and secretaries (see fig. III.14). The remaining staff included job categories such as alien certification clerk, apprenticeship training assistant, computer specialist, executive assistant, and program analyst. Approximately 62 percent of ETA's field office staff were at the GS-11, –12, and –13 grade levels.
Staff at the GS-5 and –6 pay levels constituted about 15 percent of ETA’s field office workforce (see fig. III.15). From one to five ETA staff persons worked in 87 of the 98 localities with ETA staff (see table III.8). Ten localities—representing the locations of ETA’s regional offices—had over 30 ETA employees. Generally, apprenticeship training representatives in grades GS-11, –12, and –13 provided the ETA presence in those localities with only one staff person. In fiscal year 1995, ETA field offices occupied space in 127 buildings throughout the United States, totaling 226,649 square feet. About 81,600 square feet was owned by GSA and 145,046 square feet was leased from privately owned sources. ETA used about 93 percent of this space for offices and the remainder for storage and other activities (see fig. III.16). At 98 of the 127 buildings ETA occupied in fiscal year 1995, other Labor components were also located at the same address. ETA’s field office costs totaled $66.4 million in fiscal year 1995. These costs included rent and utilities; staff salaries and benefits; and other costs, such as equipment, supplies, and materials. ETA’s field office costs exceeded those of five of the other nine Labor components. Rent and utility costs were about $5 million, which was about 7 percent of ETA’s total field office costs. Costs for staff salaries and benefits totaled $51.4 million and other costs totaled $10.1 million, which were about 77 and 15 percent, respectively, of ETA’s total field office costs. ETA has begun to reassess its field structure and is considering realigning and/or consolidating certain programs, functions, services, and field offices. ETA is currently reevaluating its operations in the 10 federal region cities with a view to locating them in the same area or building where feasible. ETA has reduced its total staff by 20 percent, well above its streamlining goal of a 12-percent reduction in total staffing by fiscal year 1999. The primary mission of the Mine Safety and Health Administration (MSHA) is to protect the safety and health of the nation’s miners, who work in coal, metal, and nonmetal mines. MSHA’s 155 field offices (see fig. III.17) develop and enforce mandatory safety and health standards, ensure compliance with the standards, conduct inspections, assess civil penalties for violations, and investigate accidents. In addition, MSHA field offices provide assistance in the development of safety programs and improve and expand training programs in cooperation with the states and the mining industry. In conjunction with the Department of the Interior, MSHA contributes to the expansion and improvement of mine safety and health research and development. MSHA primarily performs its enforcement and assessment functions through a complement of offices known within the component as district, subdistrict, and field offices, not regional offices. According to MSHA, the mine community as well as Labor benefits from these offices. The geographical distribution of MSHA’s field offices facilitates the efficient and effective operation of MSHA’s safety and health programs. The distribution of the field offices minimizes the travel time and costs of the inspection and technical staff, which increases the time available for inspection and compliance assistance activities. Also, the proximity of the field offices to the nation’s mines allows MSHA to be more accessible to the mining community and to respond quickly to mine emergencies.
Table III.9 provides key information about the 16 district offices, 17 subdistrict offices, 108 field offices, 11 field duty stations, and one training center that compose MSHA’s field structure. MSHA’s various offices generally perform the following functions: District offices. A district office is responsible for monitoring all active mining in its jurisdiction. One set of MSHA district offices monitors coal mines, while the other oversees the activities of mines that produce metals and nonmetals. A district office provides the managerial oversight and administrative support for the subdistrict and field offices. Subdistrict offices. These offices provide the direct technical supervision of the field offices and field duty stations. Field offices. A field office is under the direct supervision of a subdistrict office. Field office staff generally inspect coal or metal/nonmetal mines or supervise those who do. Field duty stations. These offices generally perform the same functions as field offices, except no supervisors are on site. One or two mine inspectors staff a field duty station and are supervised by a field office. Training center. The National Mine Health and Safety Academy in Beckley, West Virginia, is responsible for providing training services and training programs for miners and MSHA employees. Other offices. The Safety and Health Technology Center in Bruceton, Pennsylvania, provides engineering and scientific capability to assist MSHA, states, and the mining industry in identifying and solving technological mine safety and health problems. MSHA’s Approval and Certification Center in Triadelphia, West Virginia, approves, certifies, and accepts machinery, instruments, materials, and explosives for underground and surface mines. Both centers report to MSHA headquarters. Because most of the nation’s coal mines are located in the Appalachian area, 8 of the 10 district offices for Coal Enforcement were located in Pennsylvania, Virginia, West Virginia, and Kentucky in fiscal year 1995. The district offices for coal mines west of the Mississippi and in the north central part of the nation were in Colorado and Indiana. However, the district offices for metal/nonmetal mines were more widely distributed because these mines are more widely dispersed throughout the country. According to MSHA, it continually assesses its field structure to best ensure the safety and health of U.S. mine workers and, when necessary, adjusts its office locations to match shifts in mining activity. According to Labor, district offices are generally staffed by district managers, technical staff and assistants, and administrative workers, while field offices are generally staffed by inspectors. Larger field offices have a supervisory inspector as well as a clerk. MSHA employed nearly 20 percent of all Labor on-board field office staff in fiscal year 1995. MSHA staff represented 50 different professional and administrative job categories. Mine safety and health inspectors and engineers made up over 60 percent of MSHA’s field office workforce (see fig. III.18). The remaining staff supported these professionals and included job categories such as mine assessment/health clerk, office automation clerk, engineer technician, computer specialist, and financial management specialist. Approximately 71 percent of MSHA’s field office staff were at the GS-11, –12, and –13 levels, with half of all MSHA field office staff at the GS-12 level.
Staff at the GS-5 and –6 pay levels composed about 14 percent of MSHA’s field office workforce (see fig. III.19). From 6 to 20 staff persons worked in 60 percent of the U.S. localities with MSHA staff (see table III.10). The 15 localities with over 30 staff generally supported MSHA’s coal and metal/nonmetal district offices. GS-11 and –12 coal mine safety and health inspectors primarily provided the MSHA presence in the seven localities with one person each. In fiscal year 1995, MSHA field offices occupied space in 123 buildings throughout the United States, totaling 575,865 square feet. About 78,900 square feet was owned by GSA, and 496,919 square feet was leased from privately owned sources. MSHA used 429,938 square feet for offices and the remainder for storage and other uses, such as training, laboratory testing, and conferences (see fig. III.20). At 20 of the 123 buildings MSHA occupied in fiscal year 1995, other Labor components were also located at the same address. MSHA field office costs totaled $173.3 million in fiscal year 1995. These costs included rent and utilities; staff salaries and benefits; and other costs, such as equipment, supplies, and materials. Rent and utility costs were about $8.8 million, which was 5 percent of MSHA’s total field office costs. Costs for staff salaries and benefits totaled $135.3 million and other costs totaled $29.2 million, which were about 78 and 17 percent, respectively, of MSHA’s total field office costs. During fiscal year 1995, MSHA began eliminating coal mine safety and health subdistrict offices as part of a multiyear effort to restructure its field operations and eliminate a managerial level. Elimination of the metal and nonmetal subdistrict offices was completed in previous years. In July 1993, Labor Secretary Reich created the Office of the American Workplace (OAW) to provide a national focal point for encouraging the creation of high-performance workplace practices and policies. During fiscal year 1995, OAW’s mission was implemented by three major subunits: the Office of Work and Technology Policy, the Office of Labor-Management Programs, and the Office of Labor-Management Standards (OLMS). Of these three subunits, OLMS is the only one supported by field offices (see fig. III.21). OAW’s 34 field offices help to administer and enforce provisions of the Labor-Management Reporting and Disclosure Act of 1959 (LMRDA), as amended, that establish standards for labor union democracy and financial integrity and require reporting and public disclosure of union reports. They also help to administer related laws affecting labor organizations composed of employees of most agencies of the federal executive branch and certain other federal agencies subject to similar standards of conduct. To protect the rights of members in approximately 48,000 unions nationwide, OAW provides for public disclosure of reports required by the LMRDA, particularly labor organization annual financial reports; conducts compliance audits to ensure union compliance with applicable standards; conducts civil and criminal investigations, particularly in regard to union officer elections and union funds embezzlement; and provides compliance assistance to union officials and union members to promote knowledge of and conformity with the law. According to Labor, several factors affected its decision to establish OLMS field offices, such as the number and size of labor unions located in a geographic area and the level of statutorily mandated work historically performed in the area.
Field offices allow staff to be within close proximity to the work and generally reduce travel costs. Table III.11 provides key information about the 10 regional offices, 18 district offices, and 5 resident investigative offices. OAW’s various field offices generally perform the following functions: Regional offices. A regional office directly supervises the operations of specific district and/or resident offices. A regional office also is staffed with investigators who conduct (1) civil and criminal investigations, particularly with regard to union officer elections and union funds embezzlement, and (2) investigative audits of unions. District offices. A district office is responsible for conducting OLMS’ investigative work, providing public disclosure of reports in accordance with statutory requirements, and providing guidance and assistance to labor organizations and others to promote compliance with the LMRDA and agency requirements. Resident investigative offices. Investigators in these 1- to 2-person offices carry out the activities performed at OAW’s regional and district offices, but in selected locations. The offices typically have no on-site manager or clerical support person. OAW employed 2.3 percent of all Labor on-board field office staff in fiscal year 1995. OAW staff represented six different professional and administrative job categories. Investigations analysts made up over 80 percent of OAW’s field office workforce (see fig. III.22). The remaining staff included auditors, computer clerks, and management assistants. Almost 80 percent of OAW’s field office staff were frontline workers: GS-11s, –12s, and –13s. Staff at the GS-5 and –6 pay levels made up about 11 percent of OAW’s field office workforce (see fig. III.23). About 2 percent of OAW’s field staff were part-time employees. From 6 to 10 staff worked in 39 percent of the 33 U.S. localities with OAW staff (see table III.12). Generally, GS-12 investigations analysts provided the OAW presence in those localities with only one staff person. According to GSA, OAW field offices occupied space in 38 buildings throughout the United States, totaling 67,465 square feet in fiscal year 1995. Of this total, 28,953 square feet was owned by GSA, and 38,512 square feet was leased from privately owned sources. OAW used 78 percent of this space for offices and the remainder for storage and other activities (see fig. III.24). At 31 of the 38 buildings OAW occupied in fiscal year 1995, other Labor components were also located at the same address. OAW field costs totaled $18.6 million in fiscal year 1995. These costs included rent and utilities; staff salaries and benefits; and other costs, such as equipment, supplies, and materials. Rent and utility costs were $1.3 million, which was 7 percent of OAW’s total field office costs. Costs for staff salaries and benefits totaled $14.1 million and other costs totaled $3.2 million, which were about 76 and 17 percent, respectively, of OAW’s total field office costs. OAW is in the process of reorganizing to streamline field office management and operations. The target field structure consists of 20 field offices, some with resident investigative offices, divided into five geographic regions. The reorganization is expected to eliminate two and, in some instances, three layers of program review, significantly expand supervisory spans of control, and increase the number of resident investigative offices.
A GM-15 regional manager with redefined responsibilities will oversee each region. Consolidation and restructuring will eliminate 5 GM-15 regional director positions, all 10 GM-14 deputy regional director positions, and 22 GM-13 supervisory investigator or district director positions. District offices will be headed by a single manager, a GM-13 or GM-14 office director, except that the Washington, D.C., and New York offices will have two office managers—a district director and a supervisory investigator—because of the large numbers of international unions in those office jurisdictions and the resulting level of complex casework, including International Compliance Audit program cases. All but those two GM-13 supervisory investigator positions will be eliminated. Most GM-13 supervisory investigator positions and GM-13 district director positions in small offices will be converted to GS-13 senior investigator positions, and a number of additional such positions will be established. Senior investigators primarily will have case-related duties and will serve as team leaders and resource persons for other investigators. In offices without on-site managers, senior investigators will also serve as the local OAW representative. No senior investigator will have managerial functions. On-site manager positions will be eliminated in the Minneapolis district office and the Kansas City regional office. The Puerto Rico and Honolulu offices will retain small investigator staffs without supervisory or clerical staff but, because of their relative geographic isolation, will continue to maintain statutorily required reports for public disclosure. Without eliminating OAW’s presence in areas where offices now exist, including all Labor regional cities, the number of full-service regional and district offices will be reduced by converting a number of small offices to resident status without public report disclosure responsibilities. OAW will convert full-service offices in Houston, New Haven, Tampa, Miami, and Newark to resident investigative offices. OAW will continue to consider whether additional resident investigative offices are needed on the basis of workload, customer service needs, and travel cost reductions. These types of offices will be staffed with one or two investigators and will have no on-site managers or clerical support, as is typical now among resident investigative offices. The Office of Inspector General (OIG) is responsible for providing comprehensive, independent, and objective audits and investigations to identify and report program deficiencies and improve the economy, efficiency, and effectiveness of Labor operations. The OIG is also responsible for ensuring employee and program integrity through prevention and detection of criminal activity, unethical conduct, and program fraud and abuse. The OIG provides Labor participation in investigations under the Department of Justice’s Organized Crime Strike Force Program. The OIG fulfills its responsibilities through two major offices—Audit and Investigation—that are supported by 44 field offices (see fig. III.25). The primary mission of the Office of Audit is to conduct and supervise audits of (1) programs administered by Labor and (2) internal operations and activities. Two divisions within the Office of Investigation—Program Fraud and Labor Racketeering—carry out the mission of this office.
The primary responsibility of the Division of Program Fraud is to investigate allegations of fraud, waste, and abuse reported by any citizen or Labor program participant or employee. The Division of Labor Racketeering conducts investigations regarding employee benefit plans, labor-management relations, and internal union affairs. The OIG conducts many of its mission-related activities at its field offices for several reasons. According to Labor, the Office of Audit’s field structure provides the greatest oversight of Labor programs because it mirrors the Department’s decentralized structure and minimizes travel costs. The field structure of the Division of Program Fraud was set up to be compatible with Labor’s regional cities so that Program Fraud staff could have immediate access to Labor program managers. Because travel is substantial for Program Fraud staff due to the large geographic areas covered by Labor’s many field offices and programs, Labor believes that establishing central field office locations in major cities provides the most economical travel possible. The Division of Labor Racketeering has offices in those cities that have historically had serious organized crime problems. Labor Racketeering agents, therefore, travel little because most of their work is in the cities where offices have been established. Table III.13 provides key information about the 9 operating offices, 23 resident offices, and 11 field offices that support the OIG’s operations. OIG’s various field offices generally perform the following functions: Operating offices (Office of Audit). These offices lead and conduct economy and efficiency audits of Labor programs and assess the (1) financial management and performance measures of Labor programs, (2) program and financial results, and (3) organizations and operations of Labor grantees and contractors. Resident offices. Resident office staff examine fraud complaints reported on the hotline or in person. These types of offices are also staffed with labor racketeering investigators. Field offices. Field office staff develop and investigate labor racketeering cases in the largest organized crime centers in the United States and supervise the activities of investigators in selected resident offices. OIG staff represented 11 different professional and administrative job categories. Criminal investigators made up almost half of OIG’s field office workforce (see fig. III.26). The remaining staff were auditors and other clerical and support staff. GS-11s, –12s, and –13s represented almost 66 percent of the OIG’s field office workforce. Staff at the GS-5 and –6 pay levels constituted less than 6 percent of the OIG’s field staff (see fig. III.27). Less than 2 percent of the OIG’s total on-board staff worked part time. From 1 to 10 Labor staff represented the OIG in over 75 percent of the 28 U.S. localities with OIG staff (see table III.14). A GS-12 or –13 criminal investigator or a GS-7 investigator assistant provided the OIG presence in the four localities with only one staff person. Four localities had over 30 OIG employees—these localities generally corresponded with the locations of the OIG’s Office of Audit operating offices. In fiscal year 1995, the OIG maintained five field offices each in Washington, D.C., and New York. According to GSA data, OIG field offices occupied space in 32 buildings throughout the United States in fiscal year 1995, totaling 79,977 square feet.
About 36,500 square feet of space was owned by GSA and 42,522 square feet was leased from privately owned sources. OIG used 67,867 square feet for offices and the remainder for storage and other uses (see fig. III.28). At 24 of the 32 buildings OIG occupied in fiscal year 1995, other Labor components were also located at the same address. Field office costs for the OIG totaled $28.9 million in fiscal year 1995. These costs included rent and utilities; staff salaries and benefits; and other costs, such as equipment, supplies, and materials. Rent and utility costs were $1.8 million, which was 6 percent of total field office costs for the OIG. Costs for staff salaries and benefits totaled $23.8 million and other costs totaled $3.1 million, which were about 82 and 11 percent, respectively, of the OIG’s total field office costs. Plans to restructure the OIG’s entire field structure were in process in fiscal year 1995; they resulted in the elimination of eight field offices in fiscal year 1996, a realignment of management functions, and fewer GM-15 positions. OIG will evaluate its Washington, D.C., field offices. In fiscal year 1996, OIG reorganized the five New York field offices and has not replaced any losses at one-person offices. The primary mission of the Occupational Safety and Health Administration (OSHA) is to ensure a work environment for American workers that is free from safety and health hazards. Staff at the 107 field offices that support OSHA (1) inspect work places to ensure compliance with health and safety standards and (2) provide advice, assistance, and services to employers and employees to prevent work place injuries and illnesses. OSHA field offices also provide technical assistance as needed to the 25 states with their own—yet federally approved—occupational safety and health programs. The field offices also monitor work place activities not covered by the state plans. Figure III.29 shows the locations of OSHA field offices and identifies the states with their own occupational safety and health programs (the programs in New York and Connecticut cover only state and local government employees). Among OSHA’s field offices are a training facility, two laboratories, and five resource centers. OSHA conducts most of its mission-related activities at its field offices for several reasons. According to OSHA officials, the field offices provide greater visibility and access to employers and employees and allow OSHA to locate staff with the necessary expertise near specific industries (such as the petrochemical companies in Houston, Texas). As part of its responsibility to monitor state occupational safety and health programs, OSHA maintains area offices in the state capitals of the 25 states with their own programs. In those states with no state occupational safety and health programs, OSHA attempts to establish field offices that are centrally located near large concentrations of industrial and other work sites. The location of OSHA area offices near industrial concentrations not only permits OSHA to effectively schedule and use staff and travel resources but also enables its staff to respond rapidly to accidents and imminent danger notifications. Finally, federal policy and other considerations have dictated that field offices be placed in certain central city locations.
Table III.15 provides key information about the 10 regional offices, 83 area offices, 6 district offices, 5 resource centers, 2 technical centers, and 1 training facility that compose OSHA’s field office structure. OSHA’s various field offices generally perform the following functions: Regional offices. A regional office provides the guidance and administrative support for all of the other OSHA field offices operating in a particular region. Area offices and resource centers. An area office is organized geographically to serve as OSHA’s primary link to employers and employees at local work sites. Staff stationed at these types of offices perform safety and health activities, such as routine work place inspections, and provide technical assistance to employers. They also document complaints about unsafe work place practices and respond to accidents and imminent danger notifications. Offices in OSHA’s San Francisco region serve the same purpose but are identified as “resource centers” because they are organized functionally rather than geographically. District offices. A district office is a small outstation reporting to an area office. District offices provide safety and health services in geographic areas that are remote from an area office but have a concentration of work places. Technical centers. OSHA maintains these centers in Salt Lake City, Utah, and Cincinnati, Ohio. Their primary function is to analyze air and substance samples taken during work place inspections and to calibrate the equipment that the inspectors use. Training institute. This is a centrally located facility in Des Plaines, Illinois, used to train occupational safety and health personnel from OSHA, its state counterparts, and other federal safety and health professionals, as well as the public on a space-available basis. In fiscal year 1995, every state and territory had at least one OSHA field office except South Dakota, Vermont, Wyoming, and Guam (see fig. III.29). OSHA’s field offices with the largest numbers of staff were in the federal region cities of Boston, New York, Philadelphia, Atlanta, Chicago, Dallas, Kansas City, Denver, San Francisco, and Seattle. OSHA employed about 17 percent of all Labor on-board field office staff in fiscal year 1995. OSHA staff represented almost 50 different professional and administrative job categories. Occupational safety and health managers/specialists and industrial hygienists made up approximately 66 percent of OSHA’s field office workforce (see fig. III.30). The remaining staff included safety engineers; chemists; computer specialists; program analysts; accountants; and clerical workers, such as safety/health assistants, clerks, and secretaries. Approximately 70 percent of OSHA’s field office staff were at the GS-11, –12, and –13 grade levels. Staff at the GS-5 and –6 pay levels constituted about 13 percent of OSHA’s field office workforce (see fig. III.31). Less than 1 percent of OSHA’s on-board staff in fiscal year 1995 worked part time. From 11 to 30 staff persons worked in 59 percent of the 97 U.S. localities with an OSHA presence (see table III.16). Thirteen localities—which generally represented the locations of OSHA’s regional offices—had over 30 OSHA employees. In fiscal year 1995, OSHA field offices occupied space in 115 buildings throughout the United States, totaling 550,535 square feet.
About a fifth of the space (or 115,804 square feet) was owned by GSA, and almost 80 percent (or 434,731 square feet) was leased from privately owned sources. OSHA used about 72 percent of this space for offices and the remainder for storage and other activities (see fig. III.32). At 61 of the 115 buildings OSHA occupied in fiscal year 1995, other Labor components were located at the same address. Field office costs for OSHA totaled $146 million in fiscal year 1995. These costs included rent and utilities; staff salaries and benefits; and other costs, such as equipment, supplies, and materials. Rent and utility costs were $10.7 million, which was 7 percent of OSHA’s total field office costs. Costs for staff salaries and benefits totaled $104.4 million and other costs totaled $31 million, which were about 72 and 21 percent, respectively, of OSHA’s total field office costs. OSHA reported no planned changes to its field office structure. The primary mission of the Pension and Welfare Benefits Administration (PWBA) is to help protect the retirement and benefit security of America’s workers as required under the Employee Retirement Income Security Act of 1974 (ERISA) (29 U.S.C. 1000 note) and the Federal Employees’ Retirement System Act. PWBA is charged with ensuring the responsible management of nearly 1 million pension plans and 4.5 million health and welfare plans, and it oversees a vast private retirement and welfare benefit system. PWBA’s major activities include evaluating and monitoring the operations of private sector pensions. PWBA conducts many of its mission-related activities through its 15 field offices (see fig. III.33). PWBA’s field structure facilitates customer assistance to pension plan participants and beneficiaries in major metropolitan areas. Decisions about the number and location of PWBA field offices are based on several factors: the number of employee benefit plans in a locality, the locations of major financial centers, and the existing Labor administrative support structure. Table III.17 provides key information about PWBA’s 10 regional offices and 5 district offices. PWBA’s field offices generally perform the following functions: Regional offices. These offices conduct investigations of employee benefit plans. When civil violations of title I of ERISA are found, the regional office staff seek voluntary corrections and/or recommend and support litigation by SOL. Criminal investigations are conducted by staff at the direction of U.S. Attorneys’ offices, which litigate the criminal cases. Regional staff also provide assistance to employee benefit plan participants and professionals who contact the office with questions or complaints. District offices. A district office carries out the same enforcement and customer service functions as a regional office. District office staff are directly supervised by an affiliated regional office. District offices, which have smaller staffs, provide a physical presence in select regions that may be larger geographically. According to Labor, this minimizes the travel time of investigators who conduct on-site investigations and provides a presence in additional metropolitan areas. PWBA staff represented 11 different professional and administrative job categories. Over 80 percent of PWBA’s field office workforce was composed of investment/pension specialists and auditors (see fig. III.34). The remaining staff were in job categories that included employee benefit plan clerk or assistant, secretary, and computer specialist.
Sixty-two percent of PWBA’s field office staff were in grades GS-11 through –13. Staff at the GS-5 and –6 pay levels constituted about 10 percent of PWBA’s field office workforce (see fig. III.35). Less than 3 percent of PWBA’s total on-board staff worked part time. Table III.18 shows that six or more staff persons provided a PWBA presence in 15 U.S. localities. Localities with 21 or more PWBA staff generally represented the component’s regional offices in these areas. In fiscal year 1995, PWBA field offices occupied space in 17 buildings throughout the United States, totaling 75,129 square feet. GSA owned 9,068 square feet of this space, and 66,061 square feet were leased from private sources. According to GSA, PWBA used 65,321 square feet of its space in the field for offices and the remainder for storage and other purposes—such as conference and training activities and food service (see fig. III.36). At 12 of the 17 buildings PWBA occupied in fiscal year 1995, other Labor components were also located at the same address. PWBA field office costs totaled $27.5 million in fiscal year 1995. These costs included rent and utilities; staff salaries and benefits; and other costs, such as equipment, supplies, and materials. Rent and utility costs were $1.6 million, which was about 6 percent of total field office costs for PWBA. Costs for staff salaries and benefits totaled $21.8 million, and other costs totaled $4.1 million, which were about 79 and 15 percent, respectively, of PWBA’s total field office costs. PWBA reported no planned changes to its field office structure. The Veterans’ Employment and Training Service (VETS) is responsible for administering veterans’ employment and training programs and activities to ensure that legislative and regulatory mandates are accomplished. Its primary mission is to help veterans, reservists, and National Guard members secure employment and their associated rights and benefits through existing programs and the coordination and implementation of new programs. VETS strives to ensure that these programs are consistent with the changing needs of employees and the eligible veteran population. VETS conducts many of its mission-related activities from 108 field offices (see fig. III.37) for several reasons. According to Labor, the field offices are strategically located to minimize travel costs as well as to facilitate interagency liaison and communications. With VETS’ field offices located in 80 percent of America’s 100 largest cities, field staff are close to employers, which helps to prevent reemployment rights claims and, when claims are made, facilitates their resolution. Field offices also allow VETS staff to perform monitoring and technical assistance activities more effectively and efficiently with reduced travel costs. Table III.19 provides key information about the 10 regional offices and 98 state offices that compose VETS’ field structure. In fiscal year 1995, VETS maintained regional offices in each of the federal region cities: Boston, New York, Philadelphia, Atlanta, Chicago, Dallas, Kansas City, Denver, San Francisco, and Seattle. In addition, VETS had a field office presence in every state—sometimes with as many as seven offices in a state, as in Texas. VETS’ field offices generally perform the following functions: Regional offices.
Regional office staff primarily (1) resolve claims made by veterans, reservists, and National Guard members when their reemployment rights have been denied by their civilian employers; (2) evaluate, through on-site visits, compliance by state employment security agency offices with veterans’ services requirements dictated by federal regulations; and (3) monitor the performance of VETS’ grantees. State offices. State office staff work closely with and provide technical assistance to state employment security agencies and Job Training Partnership Act grant recipients to ensure that veterans are provided the priority services required by law. They also coordinate with employers, labor unions, veterans service organizations, and community organizations through planned public information and outreach activities. In addition, they give federal contractors management assistance in complying with their veterans affirmative action and reporting obligations. VETS staff represented five different professional and administrative job categories. Veterans employment representatives and program specialists made up approximately 70 percent of VETS’ field office workforce (see fig. III.38). The remaining staff included veterans reemployment rights compensation specialists, clerks, and other support staff. Approximately 42 and 25 percent of VETS’ field office staff were GS-12s and –13s, respectively. Staff at the GS-5 and –6 pay levels constituted about 24 percent of VETS’ field office workforce (see fig. III.39). Less than 1 percent of VETS’ on-board staff worked part time. From one to five VETS staff were located in 83 localities, and about 38 percent of these locations were staffed by one person. Generally, GS-12 veterans employment representatives provided the VETS presence in the localities with only one person. No single locality had more than 10 VETS staff stationed there (see table III.20). In fiscal year 1995, VETS field offices occupied space in 13 buildings throughout the United States, totaling 12,811 square feet. GSA owned 5,634 square feet of VETS field office space, and 7,177 square feet were leased from private sources. VETS used 12,423 square feet of its total field space for offices and the remainder for other uses (see fig. III.40). At 11 of the 13 buildings VETS occupied in fiscal year 1995, other Labor components were also located at the same address. Field office costs for VETS totaled $16.7 million in fiscal year 1995. These costs included rent and utilities; staff salaries and benefits; and other costs, such as equipment, supplies, and materials. Rent and utility costs were $289,839, which was about 2 percent of VETS’ total field office costs. Costs for staff salaries and benefits totaled $13.4 million, and other costs totaled $3 million, which were 80 and 18 percent, respectively, of VETS’ total field office costs. VETS is awaiting congressional approval to reduce the number of field offices that support its operations. VETS has also reduced staff through attrition. [Table V.1, Total Labor Field Offices and Staff by Federal Region, is not reproduced here. Its notes indicate that some localities had no official field office, with the employee supervised out of another office, and that some outstationed staff worked out of home.]
Pursuant to a congressional request, GAO provided information about the field offices supporting the Departments of Education and Labor, focusing on the field offices': (1) locations; (2) functions, staffing, space, and operating costs; and (3) proposed structural changes. GAO found that: (1) in fiscal year 1995, the Department of Education had 72 field offices and the Department of Labor had 1,074 field offices; (2) Labor and Education spent a total of $867 million in support of their field office operations; (3) about 94 percent of Education's field staff and 42 percent of Labor's field staff were located in 10 regional cities; (4) Labor had a high concentration of staff in its field offices, reflecting the agency's general responsibilities; (5) the majority of the amount spent in supporting field office operations was for staff salaries and benefits; and (6) Labor and Education are planning to make changes in their field office structures to improve efficiency and contain administrative costs.
The United States has a long tradition of providing benefits to those injured in military service, but the role of the federal government in providing for the health care needs of other veterans has evolved and expanded over time. In the nation’s early years, the federal role was limited to direct financial payments to veterans injured during combat; direct medical and hospital care was provided by the individual colonies, states, and communities. The Continental Congress, seeking to encourage enlistments during the Revolutionary War, provided federal compensation for veterans injured during the war and their dependents. Similarly, the first U.S. Congress passed a veterans’ compensation law. The federal role in veterans’ health care significantly expanded during and following the Civil War. During the war, the government operated temporary hospitals and domiciliaries in various parts of the country for disabled soldiers until they were physically able to return to their homes. Following the war, the number of disabled veterans unable to cope with the economic struggle of civilian life became so great that the government built a number of “homes” to provide domiciliary care. Incidental medical and hospital care was provided to residents for all diseases and injuries. The modern era of veterans’ benefits began with the onset of World War I, when a series of new benefits was added: voluntary life insurance, allotments to take care of the family during military service, reeducation of those disabled, disability compensation, and medical and hospital care for those suffering from wounds or diseases incurred in the service. During the war, Public Health Service (PHS) hospitals treated returning veterans, and, at the end of the war, several military hospitals were transferred to PHS to enable it to continue serving the growing veteran population. In 1921, those PHS hospitals primarily serving veterans were transferred to the then newly formed Veterans’ Bureau. During the 1920s, three federal agencies—the Veterans’ Bureau, the Bureau of Pensions in the Interior Department, and the National Home for Disabled Volunteer Soldiers—administered various benefits for veterans. With the establishment of the Veterans Administration in 1930, previously fragmented services for veterans were consolidated under one agency. The responsibilities and programs of the Veterans Administration grew significantly during the ensuing decades. For example, the VA health care system grew from 54 hospitals in 1930 to 173 hospitals, more than 375 outpatient clinics, 130 nursing homes, and 39 domiciliaries in 1996; the World War II GI Bill is said to have affected the American way of life more than any other law since the Homestead Act almost a century before, and further educational assistance acts were passed for the benefit of veterans of the Korean conflict, the Vietnam era, the Persian Gulf War, and the current all-volunteer force; and in 1973, the Veterans Administration assumed responsibility for the National Cemetery System, and VA is now charged with the operation of all national cemeteries except Arlington National Cemetery. In 1989, the Department of Veterans Affairs was established as a cabinet-level agency.
VA’s major benefits programs are divided among the Veterans Health Administration (VHA), headed by the Under Secretary for Health; the Veterans Benefits Administration, headed by the Under Secretary for Benefits, which administers compensation for service-connected disabilities, pensions for low-income war veterans, education loans, life insurance, and home loans; and the National Cemetery System, headed by a Director. Figure 1 shows the organizational structure of VA. In our testimony 2 years ago, we pointed out that VA lagged far behind the private sector in improving the efficiency of its health care system. Specifically, we said that the VA system lacked (1) oversight procedures to effectively assess the operations of its medical centers, (2) mechanisms to shift significant resources among medical centers to provide consistent access to VA care, (3) information systems capable of effectively coordinating patient care among VA facilities, and (4) a corporate culture that valued economy and efficiency. VA has made significant progress in improving the efficiency of its health care system. For example, it has consolidated management of nearby hospitals to reduce administrative costs, increased the use of ambulatory surgery, and reduced average lengths of stay. Under the leadership of the Under Secretary for Health, VA has a new emphasis on economy and efficiency as well as on customer service. Two years ago, we told you that VA’s central office lacked much of the systemwide information it needed to effectively (1) monitor the performance of its medical centers, (2) ensure that corrective actions are taken when problems are identified, and (3) identify and disseminate information on innovative programs. Since then, VA has established a new decentralized management structure and established performance measures to hold managers accountable for improving efficiency and ensuring the quality of services. VA reorganized its health care facilities into 22 VISNs. This reorganization contains several elements that hold promise for providing the management framework needed to realize the system’s full potential for efficiency improvements. First, VA plans to hold network directors accountable for VISNs’ performance by using, among other things, cost-effectiveness goals and measures that establish accountability for operating efficiently to contain or reduce costs. Second, the Under Secretary for Health (1) distributed criteria that could guide VISN directors in developing the types of efficiency initiatives capable of yielding large savings and (2) gave VISN and facility directors authority to realign medical centers to achieve efficiencies. Finally, VHA developed a new method for allocating funds to its VISNs with the intent of creating additional incentives to improve efficiency. Consistent with the requirements of GPRA, VHA established five basic goals for its health care system. These goals are to provide excellence in health care value, provide excellence in service as defined by customers, provide excellence in education and research, be an organization that is characterized by exceptional accountability, and be an employer of choice. Under each goal, VHA established objectives and performance measures for gauging progress toward meeting both the specific objectives and overall program goals.
For example, VHA’s performance measures include goals to decrease the number of bed-days of care provided per 1,000 unique users by 20 percent from the 1996 level, increase the percentage of patients reporting their care as “very good to excellent” by 5 percent annually, enroll 80 percent of patients in primary care, and increase the number of medical care residents trained in primary care. Contracts with individual VISN directors reflect these goals and performance measures. In addition, each VISN has developed a business/strategic plan. The plans are generally organized around the five broad goals. Two years ago, we testified that VA could reduce inconsistencies in veterans’ access to care by better matching medical centers’ resources to the volume and demographic makeup of eligible veterans requesting services at each medical center. Although VA had developed a new resource allocation system, the Resource Planning and Management (RPM) system, we pointed out that the system had shifted few resources among medical centers and allocated resources on the basis of prior workload without any consideration of the incomes or service-connected status of veterans who make up that workload. VA plans to begin shifting resources among VISNs using the new system, the Veterans Equitable Resource Allocation (VERA) system. The system is based on calculations of the cost per veteran-user in each VISN. VISNs that have the highest costs per veteran-user will lose funds, while VISNs with the lowest costs per veteran-user will get additional funds. Adjustments are included for the higher labor costs in some VISNs and for differences in the costs of medical education, research, equipment, and nonrecurring maintenance. We applaud VA’s efforts to develop a simple, straightforward method for allocating resources. However, we have the same basic concern about VERA that we had about RPM. That is, VA has not determined the “right” amount of dollars that need to be shifted to ensure equity of access. Our concern is based on the fact that VA has not adequately determined the reasons for differences between VISNs in costs per veteran-user. Without a better understanding of why the costs vary, VA cannot, with any certainty, determine the appropriate amount of resources to shift among VISNs. VA data can give starkly different pictures of the comparability of veterans’ access to VA care depending on the basis used for the comparison. For example, basing a comparison of equity of access on the percentage of the total veteran population in a VISN that is provided VA services would suggest that veterans in the Sunbelt generally have better access to VA care than do veterans from the Midwest and Northeast. Over 17 percent of veterans in VISN 18 (Phoenix) received VA services in fiscal year 1995, compared with about 8 percent of veterans in VISN 4 (Pittsburgh). Similarly, about 14 percent of veterans in VISN 9 (Nashville) received VA health care services in fiscal year 1995, compared with about 8 percent of those in VISN 11 (Ann Arbor). Such data could suggest the need to shift resources from VISNs where VA has a high market share of the veteran population to VISNs where VA has lower market shares. VA is attempting to develop data on the demographics of the veteran population by VISN to better understand the basis for differing market shares. Other VA data suggest that VISNs in the Northeast and Midwest may receive more than their fair share of VA resources. For example, VISN 18 received $3,197 per veteran served in fiscal year 1996, compared with $4,829 per veteran served in VISN 4.
Similarly, VISN 9 received $4,071, compared with $4,360 in VISN 11. Both VERA data and data from prior allocation models suggest that differences in efficiency are a major factor in the variation in spending per veteran-user. Veteran-users in VISN 3 (the Bronx) are hospitalized over three times as often as are veterans in VISN 18. In addition, VA found that VISNs that have higher costs per veteran-user also tend to have more doctors and nurses per patient and provide more bed-days of care per patient than the VISNs with lower costs per veteran-user. Costs per veteran-user also reflect the extent to which veterans use private sector providers rather than rely on VA for comprehensive care. For example, we found that only about half of the Medicare-eligible veterans using VA health care relied on VA for all of their care. As a result, VISNs serving higher percentages of Medicare-eligible and privately insured veterans could expect to have lower costs per veteran. Finally, differences in the extent of incidental use of VA services could affect cost per veteran-user. Incidental use could artificially decrease the VISN’s average cost of care for veterans who regularly use VA and overstate the VISN market share of the veteran population. VA also has not developed data showing that the VISNs with lower than average expenditures per veteran-user need additional funds. In other words, it has not determined how much an efficient and well-managed VISN should be spending on each veteran-user. VISNs’ draft business/strategic plans generally discuss how they will use the additional funds. Those plans have not, however, been reviewed and approved by central office. Some VISN plans indicate that the additional funds will be used to reduce waiting times or increase the number of staff per patient. Others, however, indicate that the funds will be used to attract additional users. Giving additional funds to a VISN with no strings attached appears to enable VISNs with the largest market shares of the veteran population to further expand their market share. This does not appear to be consistent with the efficient use of resources that was one of the objectives of Public Law 104-204. The effects of the new allocations will therefore need to be monitored, including any increases in waiting times and changes in customer satisfaction. One way to develop a resource allocation system that would be consistent with the provisions of Public Law 104-204, easy to administer, and less subject to gaming would be to base the allocation on the veteran population in each VISN, with adjustments based on the numbers of veterans in each of the priority categories for enrollment in the VA health care system. To lessen the incentive for VISNs to target enrollment toward younger, healthier veterans with private insurance, separate rates could be established for various categories of veterans on the basis of VA’s historical cost and utilization data (a minimal illustration of this option appears below). We are currently developing data to more fully explore this option. VA recognizes that VERA is not a perfect system and is continuing to explore options for improving its resource allocation methods. For example, VA, like GAO, is developing data to more fully explore the potential effects of population-based allocations. It plans, however, to go forward with allocations using VERA through fiscal year 1998 in order to provide needed financial incentives for certain VISNs to focus on efficiency improvements. Otherwise, allocations tied to historic budgets might delay needed efficiency improvements until another allocation method could be developed.
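To make the population-based option concrete, the short sketch below (in Python) illustrates the arithmetic of allocating a fixed budget in proportion to each network's category-weighted veteran population. This is a minimal sketch, not VA's or GAO's method: the VISN names, veteran counts, category labels, per-category rates, and budget figure are all hypothetical, standing in for rates that would in practice be derived from VA's historical cost and utilization data.

# Illustrative sketch (hypothetical figures): allocate a budget in
# proportion to each VISN's category-weighted veteran population.

visn_population = {
    # Veteran population by enrollment priority category (hypothetical).
    "VISN A": {"service_connected": 60_000, "low_income": 90_000, "other": 250_000},
    "VISN B": {"service_connected": 45_000, "low_income": 70_000, "other": 180_000},
}

# Hypothetical relative cost weights per veteran in each category; real
# rates would come from historical cost and utilization data.
category_rate = {"service_connected": 1.8, "low_income": 1.3, "other": 0.4}

total_budget = 1_000_000_000  # hypothetical dollars to be allocated

def weighted_population(categories):
    # Each veteran counts in proportion to the expected relative cost
    # of serving veterans in that priority category.
    return sum(count * category_rate[cat] for cat, count in categories.items())

weights = {visn: weighted_population(cats) for visn, cats in visn_population.items()}
total_weight = sum(weights.values())

# A VISN's allocation is its share of the total weighted population.
allocations = {visn: total_budget * w / total_weight for visn, w in weights.items()}

for visn, dollars in sorted(allocations.items()):
    print(f"{visn}: ${dollars:,.0f}")

Because each network's share depends only on the size and mix of its veteran population, not on its own spending or enrollment decisions, a VISN could not raise its allocation by enrolling more low-cost users—the gaming concern discussed above.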
Without accurate and complete cost and utilization data, VA managers cannot effectively decide when to contract for services rather than provide them directly, how to set prices for services it sells to other providers, or how to bill insurers for care provided to privately insured veterans. Accurate utilization data are also essential to help ensure quality and to prevent abuse. Since February 1994, VA has been phasing in at its facilities a new Decision Support System (DSS) that uses commercially available software to help provide managers data on patterns of care and patient outcomes as well as their resource and cost implications. While DSS has the potential to significantly improve VA’s ability to manage its health care operations, the ultimate usefulness of the system will depend not on the software but on the completeness and accuracy of the data going into the system. If DSS is not able to provide reliable information, VA facilities and VISNs will either continue to make decisions on the basis of unreliable information or spend valuable time and resources developing their own data systems. Two years ago, we recommended that VA identify the data needed to support decision-making and ensure that these data are complete, accurate, consistent, and reconciled monthly. VA plans to begin implementing DSS at the final group of VA facilities this month. VA still has not adequately focused on improving the completeness and reliability of data entered into the feeder systems, although it has started to reconcile DSS data on a monthly basis. Although the draft business/strategic plans developed by the 22 VISNs generally discuss goals and timetables for implementing DSS throughout the network, they identify no plans for improving the completeness and accuracy of the data feeding into DSS. In our testimony 2 years ago, we focused on four major challenges facing VA because of a rapidly changing health care marketplace. Specifically, VA was faced with unequal access to health care services because of complex VA eligibility requirements, limited outpatient facilities, and uneven distribution of resources; a continuing decline in the number of hospital patients that threatened the economic viability of its hospitals; unmet needs, including the acute care needs of uninsured veterans not living close to a VA hospital and the needs of special care populations such as those who are blind, paralyzed, or suffering from post-traumatic stress disorder; and the growing long-term care needs of an aging veteran population. Significant progress has been made in addressing the first challenge—improving veterans’ access to VA outpatient care. The remaining challenges, however, remain largely unchanged. In fact, VA’s progress in improving the efficiency of its hospitals has accelerated the decline in hospital workload, heightening the need to address the future of VA hospitals. In addition, VA’s plans to attract new users focus primarily on attracting insured and higher-income veterans with other health care options rather than on addressing the unmet needs of veterans with service-connected conditions and low-income veterans. The first major challenge facing VA health care 2 years ago was the uneven access to health care caused by complex VA eligibility requirements, limited outpatient facilities, and uneven distribution of resources.
We noted at the time that veterans’ ability to obtain needed health care services from VA frequently depended on where they lived and the VA facility that served them. During the past 2 years, much progress has been made in improving veterans’ access to care. Eligibility for VA health care was expanded, eliminating the hard-to-administer “obviate the need for hospitalization” provision that limited most veterans’ access to routine outpatient care. All veterans are now eligible for comprehensive inpatient and outpatient care subject to the availability of resources.

VA established community-based outpatient clinics (CBOC) to improve veterans’ access to outpatient care. Until 1995, VA required its hospitals to meet rigid criteria to establish outpatient clinics apart from the hospitals. These criteria included a minimum number of veterans to be served in a clinic and a minimum distance that clinics had to be from the VA hospitals. In encouraging its hospitals to consider establishing CBOCs, previously known as “access points,” VA eliminated many of its restrictions concerning the workload and location of proposed clinics. In addition, VA policy now encourages hospitals to provide care not only in VA-operated facilities, but also by contracting with other providers. Although only 12 CBOCs were operational by September 1996, plans had been developed to establish hundreds of additional clinics.

VA’s contracting authority was revised to make it easier for VA to buy services from private providers and to sell services to the private sector. Previously, VA’s authority was restricted primarily to purchasing services from and selling services to other government health care facilities and VA’s medical school affiliates. Using its expanded contracting authority, VA is moving quickly to establish additional CBOCs.

The second major challenge facing VA health care 2 years ago was the declining use of VA hospitals. Between 1969 and 1994, the average daily workload in VA hospitals declined by about 56 percent. VA reduced its operating beds by about 50 percent, closing or converting to other uses about 50,000 hospital beds. VA now finds itself increasingly a victim of its own success and faced with what to do with so much unused inpatient infrastructure. As VA’s efforts to increase the efficiency of its health care system gained momentum during the past 2 years, the decline in VA hospital use accelerated. Between fiscal years 1994 and 1996, the average daily workload in VA hospitals dropped over 20 percent (from 39,953 patients in 1994 to 31,679 in 1996). Operating beds dropped from 53,093 in 1994 to 45,798 in 1996.

Hospital use in the VA system varies dramatically. Last year, we reported that the Northern California Health Care System, a part of VISN 21, was supporting the hospital care needs of its users with about 2 beds per 1,000 users. Some VISNs, however, have over 20 hospital beds per 1,000 veteran-users. As a result, further significant declines in operating beds are likely as the variation in hospital use is reduced. For example, VISN 5 (Baltimore) estimates that its acute hospital beds will have decreased by 58 percent by fiscal year 2002 (from 1,087 in fiscal year 1995 to 460 in 2002). Recent VA actions to establish preadmission reviews for all scheduled hospital admissions and continuing stay reviews for those admitted—actions we have advocated for over 10 years—should further reduce hospital use.
VA may not realize the full potential from these reviews, however, unless physicians’ incentives to minimize inappropriate inpatient care are increased. VISN 5 (Baltimore), for example, uses its reviews primarily for data collection, evaluation, and monitoring. The program does not act as a gatekeeper, and inpatient care is not denied on the basis of results of the preadmission reviews. Reviews at the VISN 5 hospitals in Martinsburg, West Virginia, and Washington, D.C., show that over 50 percent of patients admitted since the program was initiated did not need acute hospital care. As workload continues to decline at VA hospitals, VA’s investment in its hospital infrastructure increasingly detracts from its ability to shift resources to other needs, such as expanding access for veterans living long distances from VA facilities.

The third major challenge that faced VA health care 2 years ago was identifying and addressing the unmet health care needs of veterans. With the growth of public and private health benefits programs, more than 9 out of 10 veterans now have alternate health insurance coverage. Still, about 2.6 million veterans had neither public nor private health insurance in 1990 to help pay for needed health care services. Without a demonstrated ability to pay for care, individuals’ access to health care is restricted, increasing their vulnerability to the consequences of poor health. Lacking insurance, people often postpone obtaining care until their conditions become more serious and require more costly medical services. Most veterans who lack insurance coverage, however, are able to obtain needed hospital care through public programs and VA.

Still, VA’s 1992 National Survey of Veterans estimated that about 159,000 veterans were unable to get needed hospital care in 1992 and about 288,000 were unable to obtain needed outpatient services. By far the most common reason veterans cited for not obtaining needed care was that they could not afford to pay for it. While the cost of care may have prevented veterans from obtaining care from private sector hospitals, it appears to be an unlikely reason for not seeking care from VA. All veterans are currently eligible for hospital care, and about 9 to 11 million are eligible for free care. Other veterans are required to make only nominal copayments.

Many of the problems veterans face in obtaining health care services appear to relate to distance from a VA facility. For example, our analysis of 1992 National Survey of Veterans data estimates that fewer than half of the 159,000 veterans who did not obtain needed hospital care lived within 25 miles of a VA hospital. By comparison, we estimate that over 90 percent lived within 25 miles of a private sector hospital. Of the estimated 288,000 veterans unable to obtain needed outpatient care during 1992, almost 70 percent lived within 5 miles of a non-VA doctor’s office or outpatient facility. As was the case with veterans unable to obtain needed hospital care, those unable to obtain needed outpatient care generally indicated that they could not afford to obtain needed care from private providers. Only 13 percent of the veterans unable to obtain needed outpatient services reported that they lived within 5 miles of a VA facility, where they could generally have received free care. Veterans’ needs for specialized services cannot always be met through other public or private sector programs.
Frequently, such services are either unavailable in the private sector or are not extensively covered under other public and private insurance. Space and resource limits in VA specialized treatment programs can result in unmet needs, as in the following cases. Specialized VA post-traumatic stress disorder programs are operating at or beyond capacity, and waiting lists exist, particularly for inpatient treatment. Although private insurance generally includes mental health benefits, private sector providers generally lack the expertise in treating war-related stress that exists in the VA system. Inadequate numbers of beds are available in the VA system to care for homeless veterans. For example, VA had only 11 beds available in the San Francisco area to meet the needs of an estimated 2,000 to 3,000 homeless veterans. Public and private insurance do not provide extensive coverage of long-term psychiatric care. Veterans needing such services must either rely on state programs or the VA system to meet their needs. VA is a national leader both in research on and treatment and rehabilitation of people with spinal cord injuries. Similarly, it is a leader in programs to treat and rehabilitate the blind. Although such services are available in the private sector, the costs of such services can be catastrophic.

Legislation enacted last year that expanded VA’s ability to contract with private sector facilities and providers gives VA an opportunity to better meet the health care needs of low-income veterans and those with service-connected conditions who previously were unable to obtain needed care because VA facilities were geographically inaccessible. Two years ago, we suggested that the VA health care system retarget resources used to provide care for higher-income veterans with nonservice-connected conditions toward lower-income veterans and those with service-connected conditions whose health care needs were not being met. VA, however, through its current legislative proposals, appears to be focusing its marketing efforts on attracting higher-income veterans with other health care options rather than using its expanded contracting authority to target its available resources toward meeting the needs of service-connected and uninsured veterans who lack other health care options.

Data from VA’s Income Eligibility Verification System show that about 15 percent of the veterans using VA facilities who have no service-connected disabilities have incomes of $20,000 or more. VA could use the resources spent to provide services to such higher-income nonservice-connected veterans to strengthen its ability to fulfill its safety net mission. For example, the resources could be used to expand outreach to medically underserved populations, such as homeless veterans; expand programs that address special care needs; or contract for hospital and other services for lower-income, uninsured veterans who do not live near VA facilities.

Our review of the draft strategic plans developed by the 22 VISNs, however, found little mention of plans to conduct outreach to veterans with limited health care options or special care needs. Nor did these plans specifically address expanding services for low-income uninsured veterans. The establishment of additional community-based outpatient clinics will address the unmet needs of some uninsured veterans. Most of the resources spent on CBOCs, however, will likely be spent on veterans who have other health care options.
This reduces the resources available to provide services to uninsured veterans. The legislative proposals contained in VA’s fiscal year 1998 budget request would target veterans with other health care options. VA claims that it will be able to cut its per-user costs by 30 percent only if it is given funds to expand the number of veterans it serves by 20 percent and allowed to keep all of the funds it recovers from private health insurance and Medicare. The new users VA anticipates attracting either have private health insurance or are higher-income Medicare beneficiaries.

The proposal to allow VA to keep all medical care cost recoveries could create strong financial incentives for VA to market its services to veterans who have no service-connected disabilities as well as private insurance. Similarly, VA is seeking authority to bill and retain recoveries from Medicare for services provided to higher-income Medicare-eligible veterans. Like recoveries from private health insurance, such Medicare subvention would create incentives for VA to market services to higher-income veterans with both Medicare and Medigap coverage rather than to lower-income Medicare-eligible veterans.

VA’s proposals also create the potential for VA to receive duplicate payments for services provided to privately insured and Medicare-eligible veterans. In other words, unless changes are made in how VA develops its budget request, it would receive both an appropriation to cover its costs of providing services to privately insured and higher-income Medicare-eligible veterans and payments from insurers and Medicare to cover those same costs. Although the 22 VISNs’ draft strategic plans discuss efforts to increase market share and attract new users, few plans contain any mention of targeting marketing efforts to veterans potentially having the greatest need for VA services—veterans with service-connected disabilities and those with low incomes and no health insurance.

As the nation’s large World War II and Korean War veteran populations age, their health care needs are increasingly shifting from acute hospital care toward nursing home and other long-term care services. But Medicare and most private health insurance cover only short-term, post-acute nursing home and home health care. Although private long-term care insurance is a growing market, the high cost of policies places such coverage out of reach of many veterans. As a result, most veterans must pay for long-term nursing home and home care services out of pocket until they spend down most of their income and assets and qualify for Medicaid assistance. After qualifying for Medicaid, they are required to apply almost all of their income toward the cost of their care.

About a third of veterans are 65 years old or older, with the fastest growing group of veterans being those 85 years old or older. This older group raises particular concerns because the need for nursing home and other long-term care services increases with the age of the beneficiary population. Over 50 percent of those over 85 years of age are in need of nursing home care, compared with about 13 percent of those 65 to 69 years old.
The Veterans Benefits Administration (VBA) also faces major challenges: VA, like other federal agencies, could be unable to issue compensation and pension checks at the beginning of the year 2000 unless it is able to reprogram its computers to recognize the next century; veterans frequently wait over 2 years for resolution of disability compensation and pension claims; and hundreds of millions of dollars in overpayments of compensation and pension benefits are made because VBA does not focus on prevention.

VA’s disability program is required by law to compensate veterans for the average loss in earning capacity in civilian occupations that results from injuries or conditions incurred or aggravated during military service. These injuries or conditions are referred to as “service-connected” disabilities. Veterans with such disabilities are entitled to monthly cash benefits under this program even if they are working and regardless of the amount they earn. In fiscal year 1995, VA paid about $11.3 billion to approximately 2.2 million veterans who were on VA’s disability rolls at that time. Over the past 50 years, the number of veterans on the disability rolls has remained fairly constant.

The amount of compensation veterans with service-connected conditions receive is based on the “percentage evaluation,” commonly called the disability rating, that VA assigns to these conditions. VA uses its “Schedule for Rating Disabilities” to determine which rating to assign to a veteran’s particular condition. VA is required by law to readjust the schedule periodically on the basis of “experience.” Since the 1945 version of the schedule was developed, questions have been raised on a number of occasions about the basis for these disability ratings and whether they reflect veterans’ current loss in earning capacity. Although the ratings in the schedule have not changed substantially since 1945, dramatic changes have occurred in the labor market and in society. VA has done little since 1945 to help ensure that disability ratings correspond to disabled veterans’ average loss in earning capacity. Basing disability ratings at least in part on judgments of loss in functional capacity would help to ensure that veterans are compensated to an extent commensurate with their economic losses and that compensation funds are distributed equitably.

VA, like other federal agencies, faces serious problems with its computer systems that will occur in the year 2000. This year, we added the “year 2000 computer problem” to our list of “high-risk” federal management areas. Unless agency computers are reprogrammed, the year 2000 will be interpreted as 1900. This could create a major problem for VA, beginning in January 2000, with its monthly processing of over 3 million disability compensation and pension checks, totaling about $1.5 billion, to veterans and their survivors. Unless the “year 2000” problem is corrected, VA’s computer system for processing these checks will either produce inaccurate checks or produce no checks at all. VA would then have to process the checks manually, causing severe delays to veterans and survivors in receiving their benefits. VA needs to move quickly to (1) inventory its mission-critical systems, (2) develop conversion strategies and plans, and (3) dedicate sufficient resources to the conversion and adequate testing of computer systems before January 1, 2000. We recently published draft guidance for agencies to use in planning, managing, and evaluating their efforts to deal with this problem.
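To illustrate the defect, the sketch below shows how date logic built on two-digit years misreads 2000 as 1900. This is an illustrative Python example, not VA’s actual payment code (which ran largely on legacy mainframe systems):

```python
# Minimal sketch of the "year 2000" defect: systems that store only the
# last two digits of the year interpret 00 as 1900 rather than 2000.
from datetime import date

def expand_two_digit_year(yy: int) -> int:
    """Naive expansion used by many legacy systems: prefix '19'."""
    return 1900 + yy  # 00 becomes 1900, not 2000

def is_check_due(stored_yy: int, stored_month: int, run_date: date) -> bool:
    """Decide whether a monthly check dated (yy, month) is due by run_date."""
    check_year = expand_two_digit_year(stored_yy)
    return (check_year, stored_month) <= (run_date.year, run_date.month)

# A check dated January "00" expands to January 1900, so schedules,
# date comparisons, and aging computations all go wrong.
print(expand_two_digit_year(0))               # 1900
print(is_check_due(0, 1, date(1999, 12, 1)))  # True -- 1900 looks long overdue
```

The same misinterpretation, repeated across every date field in a payment system, is what could produce inaccurate checks or no checks at all.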
We are currently reviewing VBA’s efforts to deal with the “year 2000” problem and plan to report to the Chairman, Subcommittee on Oversight and Investigations, House Committee on Veterans’ Affairs, this spring.

Slow claims processing and poor service to customers have long been recognized as critical concerns for VA. As early as 1990, VA began encouraging regional offices to develop and implement improvements in their claims processing systems; but instead of decreasing, processing times and backlogs increased. At the end of fiscal year 1994, almost 500,000 claims were waiting for a VA decision. About 65,000 of these claims were initial disability compensation claims. On average, veterans waited over 7 months for their initial disability claims to be decided; if veterans appealed these decisions, they could wait well over 2 years for a final decision.

VA has not, however, developed evaluation plans to allow it to judge the relative merit of its various initiatives. Without such information, VA will not have a sound basis for determining what additional changes, if any, should be made and for guiding future improvement efforts. In addition, VA did not have a formal mechanism to disseminate information about the content and effectiveness of various regional office initiatives to allow other regional offices to learn from the experiences.

VA is proposing a redesign of its claims processing system that would incorporate several initiatives. VA has conducted a business process reengineering effort on its compensation and pension claims processing system. VA has also established claims processing goals that include completing original compensation claims within 53 days by eliminating unnecessary tasks, reducing the number of hand-offs involved in the process, making information technology changes, and providing additional training for rating specialists. However, it is unclear at this time how successful these initiatives will be, how they will be evaluated, and how regional offices’ experiences will be shared. VBA officials told us that the claims backlog has been reduced from 500,000 to about 326,000 as a result of VBA’s actions.

Despite its responsibility to ensure accurate benefit payments, VA continues to overpay veterans and their survivors hundreds of millions of dollars in compensation and pension benefits each year. For example, at the end of 1996, VA’s outstanding overpayments exceeded $500 million. VA has the capability to prevent millions of dollars in overpayments but has not done so because it has not focused on prevention. For example, we reported in April 1995 that VA did not use available information, such as when beneficiaries will become eligible for Social Security benefits, to prevent related overpayments from occurring. Furthermore, VA did not systematically collect, analyze, and use information on the specific causes of overpayments that would help it target preventive efforts. VA has since neither analyzed the causes of overpayments nor developed strategies for targeting additional preventive efforts.

The Congress, through recent legislation, established a framework to help federal agencies (1) improve their ability to address long-standing management challenges and (2) meet the need for accurate and reliable information for executive branch and congressional decision-making.
This framework includes GPRA, which is designed to improve federal agencies’ performance by requiring them to focus on their missions and goals and on the results they provide to their customers—in VA’s case, veterans and their families; the CFO Act of 1990, as amended by the Government Management Reform Act, designed to improve the timeliness, reliability, usefulness, and consistency of financial information in federal agencies; and the Paperwork Reduction Act of 1995 and the Clinger-Cohen Act of 1996, which are intended to improve agencies’ ability to use information technology to support their missions and improve performance.

VA has begun to implement these acts, which can help it (1) develop fully integrated information about its mission and strategic priorities, (2) develop and maintain performance data to evaluate achievement of its goals, (3) develop accurate and audited financial information about the costs of achieving VA’s results-oriented mission, and (4) improve the relationship of information technology to the achievement of performance goals.

GPRA requires that agencies consult with the Congress and other stakeholders to clearly define their missions. It also requires that they establish long-term strategic goals, as well as annual goals linked to them. They must then measure their performance against the goals they have set and report publicly on how well they are doing. In addition to ongoing performance monitoring, agencies are expected to identify performance gaps in their programs and to use information obtained from these analyses to improve the programs. Under GPRA, VA and other federal agencies must complete strategic plans by September 30, 1997. While VA has not yet completed its GPRA strategic plan, its fiscal year 1998 budget submission to the Congress includes some of the elements of the GPRA planning process. The budget submissions for both of VA’s largest components—VHA and VBA—included strategic planning documents. Both the VHA and VBA plans included overall mission statements; identification of customers and stakeholders; program goals and objectives; and performance measures related to the goals and objectives.

VHA’s strategic plan, as stated in its fiscal year 1998 budget submission, is based on five goals developed in March 1996 by the Under Secretary for Health. VHA then attached objectives and performance measures to each goal. For the first goal—“Provide Excellence in Healthcare Value”—VHA stated three objectives: (1) deliver the best health care outcomes at the lowest cost to the largest number of eligible veterans, (2) change VHA from a hospital-based to an ambulatory-based system, and (3) establish primary care as the central focus of patient treatment. To measure progress toward achieving its goals, VHA proposed eight performance measures. For the second objective, for example, VHA plans to increase the percentage of appropriate surgical and invasive diagnostic procedures performed on an ambulatory basis from 52 percent in fiscal year 1996 to 65 percent in fiscal year 1998.

VBA’s strategic planning process began in July 1995, with definitions of its mission, goals, and core performance measures.
As stated in the fiscal year 1998 budget submission, VBA’s mission is to “provide benefits and services to veterans and their families in a responsive, timely and compassionate manner in recognition of their service to the nation.” To accomplish this mission, VBA has set out four goals: (1) improve responsiveness to customer needs and expectations, (2) improve service delivery and benefit claims processing, (3) ensure the best value for the available taxpayers’ dollar, and (4) ensure a satisfying and rewarding work environment. The plan is then broken down by VBA’s major program areas. For example, the Compensation and Pension program area has performance indicators to measure progress in meeting VBA’s goal of improving service delivery and benefit claims processing by reducing the processing time for original compensation and pension claims from 144 days in fiscal year 1996 to 53 days in fiscal year 2002 and raising the accuracy rate for original compensation claims from 90 percent in fiscal year 1996 to 97 percent in fiscal year 2002.

We are currently reviewing VA and other agencies’ initial implementation of GPRA. As required under the legislation, we will report by June 1, 1997, on GPRA implementation and the prospects for governmentwide compliance. We would be happy to assist the Congress in reviewing draft and final VA submissions under GPRA, including strategic plans, performance plans, performance reports, evaluations, and related VA performance information.

The CFO Act was designed to remedy decades of serious neglect in federal financial management and accountability by establishing a financial management leadership structure and requirements for long-range planning, audited financial statements, and strengthened accountability reporting. The act created CFO positions and a financial management structure at each of the major agencies. The CFO Act, as expanded in 1994, requires VA, as well as other major agencies, to prepare annual financial statements, beginning with those for fiscal year 1996. VA has established a sound financial management structure; in addition to the Assistant Secretary for Management, who serves as CFO, VHA and VBA each has a CFO. Also, VHA plans to have a CFO position in each of its 22 VISNs. VA has prepared, and had audited, annual financial statements since fiscal year 1986, a decade before the CFO Act requirement took effect.

VA’s response to the CFO Act has led to a number of financial management improvements, including the installation of VA’s Financial Management System, which gives VA, for the first time, an integrated financial management system; improvements in reporting of receivables and property management, due to the implementation of the financial management system, that resulted in the first issuance by a VA Inspector General of an unqualified opinion on VA’s Statement of Financial Position as of September 30, 1996; and the consolidation of debt collection activities at VBA’s Debt Management Center in St. Paul, Minnesota, to take full advantage of debt management tools.
The Inspector General’s audit of VA’s fiscal year 1996 financial statement disclosed six internal control weaknesses that expose VA to significant financial risks: errors in accounting for property, plant, and equipment, which could result in a future qualification of opinion if not corrected; errors by medical facilities in recording estimated amounts of unbilled services and in estimating uncollectible amounts; failure to cancel approximately $69 million in open obligations that should have been cancelled before the end of the fiscal year—funds that could have been reprogrammed and used for other valid needs if they had been identified before the appropriations expired; an outdated data processing system for VA’s life insurance programs that has the potential to adversely affect the complete and accurate processing of insurance transactions and the integrity of the financial information generated by the system; insufficient VA management emphasis on, and oversight of, VA data processing facilities to ensure that data processing systems are protected from unauthorized access and modification of data; and lack of an integrated financial accounting system for VA’s Housing Credit Assistance Program which, when coupled with the complexities of accounting requirements under credit reform, increases the risk of financial reporting error.

The Paperwork Reduction Act and the Clinger-Cohen Act establish requirements for information management in general, and the acquisition and use of information technology in particular. VA has made efforts to improve its information management systems, including the appointment of the Assistant Secretary for Management as VA’s Chief Information Officer. The Clinger-Cohen Act requires, however, that information resources management be the primary function of an agency’s chief information officer. This is not the case in VA, because the Assistant Secretary for Management is not only VA’s Chief Information Officer, but is also responsible for its Offices of Financial Management, Budget and Acquisition, and Materiel Management. The Office of Management and Budget (OMB) has questioned whether information management is the “primary function” of the Assistant Secretary for Management, and whether VA is in compliance with the Clinger-Cohen Act. In August 1996, OMB asked VA to reevaluate the placement of its chief information officer function and report within a year on how it will come into compliance with the Clinger-Cohen requirement.

VBA’s information technology efforts have yielded some improvements in its hardware and software capabilities. However, our reviews of information management in VBA have identified problems that need to be addressed. One is the need for VBA to develop credible strategic business and information resources management plans. VBA has undertaken several initiatives to improve claims processing efficiency and reduce its large backlog of unprocessed claims. But it has done so without an overall business strategy clearly setting forth how it would achieve its goals. Instead, VBA has used stopgap measures to deal with its claims processing problems. While these measures have improved processing times and reduced the claims backlog, VA needs to find other solutions.

A second challenge is for VBA to strengthen its management of information technology-related projects. It also needs to develop a process to rank and prioritize information technology investments as a consolidated portfolio. A third challenge for VBA is to improve its software development capability.
Once agencies have identified their top priority information technology projects, they must be able to determine whether the project should be developed in-house or contracted out. Our review of VBA’s software development capabilities found that, on a scale of software development maturity, VBA was in the “least mature” category. Thus, VBA cannot reliably develop and maintain high-quality software within existing cost and schedule constraints. This, in turn, places VBA’s information technology modernization efforts at significant risk. We made several recommendations to address this issue. These recommendations and VA’s responses follow:

Obtain expert advice on developing high-quality software. VBA is working with the Air Force, under an interagency agreement, to implement this recommendation.

Develop a plan to achieve a higher level of software development maturity. VBA has developed such a plan and has taken other actions to improve software development maturity.

Require that future software development contracts specify that services be obtained from contractors with at least a level 2 rating (on a scale of 1 to 5, with 5 being the highest level). According to VBA, it plans to award a general software contract with a provision regarding the necessary software development skills.

We periodically report to the Congress on options for reducing the budget deficit. Our latest report, issued March 14, 1997, identified a series of potential changes in veterans’ benefits and VA programs that could contribute many billions of dollars toward deficit reduction over the next 5 years. Some of the options involve management improvements that could be achieved by the agency. Others, however, would require fundamental policy changes in veterans’ benefits, including changes in entitlement programs.

During 1996, VA paid approximately $1.7 billion in disability compensation payments to veterans with diseases neither caused nor aggravated by military service. In 1996, the Congressional Budget Office (CBO) reported that about 230,000 veterans were receiving about $1.1 billion annually in VA compensation for these diseases. Other countries we contacted do not compensate veterans under such circumstances. If disability compensation payments to veterans with nonservice-connected, disease-related disabilities were eliminated in future cases, CBO estimated that 5-year savings could exceed $400 million.

In fiscal year 1994, VA spent more than $1 billion in educational assistance benefits to more than 450,000 beneficiaries. In addition, it spent over $12 million on contracts with state approving agencies to assess whether schools and training programs offer education of sufficient quality for veterans to receive VA education assistance benefits when attending them. An estimated $10.5 million of the $12 million paid to state approving agencies was spent to conduct assessments that overlapped assessments performed by the Department of Education. CBO estimated that at least $50 million could be saved over the next 5 years if the Congress directed VA to discontinue contracting with state approving agencies to review and approve educational programs at schools that have already been reviewed and certified by Education.

State veterans’ homes recover as much as 50 percent of the costs of operating their facilities through charges to veterans receiving services. Similarly, Oregon recovers about 14 percent of the costs of nursing home care provided under its Medicaid program through estate recoveries.
In fiscal year 1990, by contrast, VA recovered less than one-tenth of 1 percent of its costs for providing nursing home care through beneficiary copayments. Potential recoveries appear to be greater within the VA system than under Medicaid. Home ownership is significantly higher among VA hospital users than among Medicaid recipients, and veterans living in VA nursing homes generally contribute less toward the cost of their care than do Medicaid recipients, allowing veterans to build larger estates. If VA adopted similar cost-sharing and estate recovery practices, billions of dollars could be saved through the increased revenues. For example, if VA recovered 25 percent of its costs of providing nursing home care through a combination of cost sharing and estate recoveries, it would save about $3.4 billion over the next 5 years.

VA hospitals too often admit patients whose care could be more efficiently provided in alternative settings, such as outpatient clinics or nursing homes. Our studies and those of VA researchers and the VA Inspector General have found that over 40 percent of VA hospital admissions and days of care were not medically necessary. Private health insurers generally require their policyholders (or their physicians) to obtain authorization from them or their agent prior to admission to a hospital. Failure to obtain such preadmission certification can result in denial of insurance coverage or a reduction in payment. We have recommended that VA establish an independent preadmission certification program. Although VA, in September 1996, required its VISNs to establish a preadmission review program, the review programs are run by the hospitals rather than by external reviewers and do not provide any direct financial incentive for facilities to adhere to the decisions of their reviewers. While the preadmission reviews are likely to have some effect on inappropriate admissions, they may not be effective unless coupled with a financial penalty for noncompliance with review findings. CBO estimated that if VA were to establish precertification procedures similar to those used by private health insurers that result in a 40-percent reduction in admissions and days of care, VA’s medical care spending could be reduced by $8.4 billion over 5 years.

Historically, VA has submitted a budget request for hundreds of millions of dollars in major health care construction projects. The requests have typically included construction or renovation of one or more hospitals. Given the declining demand for inpatient care, committing funds to major construction before VA assesses its future facility needs risks investing in unneeded capacity and creates additional uncertainty. In addition, we believe that analyzing alternatives to major construction projects is entirely consistent with VA’s suggested realignment criteria. Delaying funding for major construction projects until the alternatives can be fully analyzed may result in more prudent and economical use of already scarce federal resources.

The potential savings of delaying funding for VA hospital construction are uncertain in the absence of an assessment of VA’s needs based on its own realignment criteria. CBO estimates that if the Congress did not approve funding of any major construction projects until after VA has completed its realignment, savings totaling more than $1.2 billion could be achieved over 5 years. VA’s fiscal year 1998 budget submission and its recent decision not to pursue construction of a new VA hospital in East Central Florida are consistent with this option. VA is seeking only $48 million for major medical construction for fiscal year 1998.
Although VA took over 50,000 hospital beds out of service between 1970 and 1995, it did not close any hospitals on the basis of declining utilization. With the declining veteran population, new technologies, and VA’s efforts to improve the efficiency of its health care system, significant further declines in demand for VA hospital care are likely. While closing wards saves some money by reducing staffing costs, the cost per patient treated rises because the fixed costs of facility operation are distributed among fewer patients. At some point, closing a hospital and providing care either through another VA hospital or through contracts with community hospitals may become less costly than simply taking beds out of service. Potential savings from hospital closures are difficult to estimate because of uncertainties about which facilities would be closed, the increased costs that would be incurred in providing care through other VA hospitals or contracts with community hospitals, and the disposition of the closed facilities.

Our work suggests, first, that the system does not need to expend the level of resources that VA has previously estimated to meet the health care needs of veterans. These resources are overstated because VA did not adequately consider the declining demand for VA hospital care in estimating its resource needs and because eligibility for VA care has been reformed—which, according to VA, will allow it to divert 20 percent of its hospital admissions to less costly outpatient settings. Second, VA could reduce its operating costs by billions of dollars over the next 5 years by completing a wide range of efficiency actions. VA recognizes that it can reduce its costs per user by 30 percent over the next 5 years but plans to use the savings to expand its market share by 20 percent.

We recently recommended that VA provide the Congress information on the savings achieved through improved efficiency in support of its budget request. We noted that providing the Congress with information on factors, such as inflation and creation of new programs, which increase resource needs, without providing information on changes that could reduce or offset those needs leaves the Congress with little basis for determining appropriate funding levels. VA, however, has been unwilling to provide such information to the Congress. One way for the Congress to respond to VA’s unwillingness to provide information on savings from improved efficiency would be to limit the VA medical care appropriation at the fiscal year 1997 level for the next 5 years. CBO estimates that this would result in almost $9 billion in savings.

Recently enacted legislation expands eligibility for VA health benefits to make all veterans eligible for comprehensive inpatient and outpatient services, subject to the availability of resources. The legislation also requires VA to establish a system of enrollment for VA health care benefits and establishes enrollment priorities to be applied, within appropriated resources. The lowest priority for enrollment is veterans with no service-connected disabilities and high enough incomes to place them in the discretionary care category. Budget savings could be achieved by limiting funding under this enrollment system.
If the Congress funded the VA health care system to cover only the expected enrollment of veterans in higher priority enrollment categories, such as veterans with service-connected disabilities and veterans without the means to obtain public or private insurance to meet their basic health care needs, CBO estimates that $1.7 billion in budget authority, adjusted for inflation, could be saved over 5 years.

VA pharmacies dispense to veterans over 2,000 types of medications and medical supplies that are available over-the-counter (OTC) through local retail outlets. Such products were dispensed more than 15 million times in 1995 at an estimated cost of $165 million. The most frequently dispensed items include aspirin, dietary supplements, and alcohol prep pads. Unlike VA, other public and private health programs cover few, if any, OTC products for their beneficiaries. Our assessment of VA’s operating practices suggests several ways that budget savings could be achieved. First, VA could more narrowly define when to provide OTC products, reducing the number of OTC products available to veterans on an outpatient basis. Second, VA could collect copayments for all OTC products. CBO estimated that these steps could save over $350 million over the next 5 years.

Legislation initially enacted in 1990 gave VA access to Internal Revenue Service tax data and Social Security Administration earnings records to help VA verify incomes reported by beneficiaries. Since then, millions of dollars in savings have been achieved in VA’s health and pension programs as a result of VA’s income verification program. Authority for the program will, however, expire on September 30, 1998. Extending the authority could generate over $115 million in savings between fiscal years 1999 and 2002.

VA is also working to resolve differences with OMB with respect to its compliance with the Paperwork Reduction Act and the Clinger-Cohen Act. VA’s progress in strengthening its management should help it address the multiple challenges facing its health and benefits programs. Under the leadership of the Under Secretary for Health, the VA health care system has made significant progress during the past 2 years in improving both its efficiency and its image. In addition, actions to expand eligibility, make it easier for VA to buy services from and sell services to the private sector, improve access, and reduce waiting times place VA in a better position to compete with private sector providers for declining numbers of veterans.

VA and the Congress, however, are faced with difficult choices. Should VA hospitals be opened to veterans’ dependents or other nonveterans as a way of increasing efficiency and preserving the system? What effect would such decisions have on private sector hospitals? To what extent should the government attempt to capture market share from private sector providers? Should the government subsidize its facilities in order to capture market share? Should some of VA’s acute care hospitals be closed, converted to other uses, transferred to states or local communities, or sold to developers? Should VA remain primarily a direct provider of veterans’ health care or become a virtual health care system in which it contracts with private sector providers rather than operating its own facilities? To what extent should the VA system address the unmet needs of uninsured veterans and those with service-connected disabilities?
Decisions regarding these and other questions will have far-reaching effects on veterans, taxpayers, veterans facilities and the VA employees working in them, and private providers. Because of the historic inefficiency of the VA system, the changes currently taking place provide many opportunities for the VA health care system to contribute toward deficit reduction while still improving services to current users. Limiting the system to current users, however, could facilitate declines in hospital use and lead ultimately to closure of VA hospitals. Keeping VA’s acute care hospitals open, however, will be exceedingly difficult: VA will have to attract an ever-increasing proportion of the veteran population if it is to do so.

VA’s fiscal year 1998 budget submission outlines its strategy for preserving its hospitals: it wants to increase its users by 20 percent in order to make more efficient use of existing VA facilities. The new users VA is targeting generally have other health care options available to them. The cost of maintaining VA’s direct delivery infrastructure limits VA’s ability to ensure similarly situated veterans equal access to VA health care. VA’s interest in providing services to veterans in the discretionary care category at VA hospitals and outpatient clinics is likely to limit its ability to provide services to low-income and service-connected veterans through the use of contract care.

Mr. Chairman, this concludes my prepared statement. I will be happy to answer any questions that you or Members of the Subcommittee might have. For more information on this testimony, call Jim Linz, Assistant Director, at (202) 512-7110. Greg Whitney also contributed to this statement.

VA Health Care: Improving Veterans’ Access Poses Financial and Mission-Related Challenges (GAO/HEHS-97-7, Oct. 25, 1996).
VA Health Care: Opportunities for Service Delivery Efficiencies Within Existing Resources (GAO/HEHS-96-121, July 25, 1996).
VA Health Care: Challenges for the Future (GAO/T-HEHS-96-172, June 27, 1996).
Veterans’ Health Care: Facilities’ Resource Allocations Could Be More Equitable (GAO/HEHS-96-48, Feb. 7, 1996).
Vocational Rehabilitation: VA Continues to Place Few Disabled Veterans in Jobs (GAO/HEHS-96-155, Sept. 3, 1996).
Veterans’ Benefits: Effective Interaction Needed Within VA to Address Appeals Backlog (GAO/HEHS-95-190, Sept. 27, 1995).
Veterans’ Benefits: VA Can Prevent Millions in Compensation and Pension Overpayments (GAO/HEHS-95-88, Apr. 28, 1995).
Veterans’ Benefits: Better Assessments Needed to Guide Claims Processing Improvements (GAO/HEHS-95-25, Jan. 13, 1995).
Managing for Results: Using GPRA to Assist Congressional and Executive Branch Decisionmaking (GAO/T-GGD-97-43, Feb. 12, 1997).
1997 High-Risk Series: Information Management and Technology (GAO/HR-97-9, Feb. 1997).
Information Technology Management: Agencies Can Improve Performance, Reduce Costs, and Minimize Risks (GAO/AIMD-96-64, Sept. 30, 1996).
Executive Guide: Effectively Implementing the Government Performance and Results Act (GAO/GGD-96-118, June 1996).
GAO discussed some of the major challenges facing the Department of Veterans Affairs (VA) and some of the options for deficit reduction through changes in VA benefits and programs. GAO noted that: (1) significant improvements have occurred in the efficiency of the VA health care system; (2) VA's new management and Veterans Integrated Service Network (VISN) structure clearly values efficiency and customer service; (3) in addition, legislation was enacted: (a) expanding eligibility for VA health care; (b) making it easier for VA to contract for and sell services to the private sector; and (c) requiring VA to develop a plan for more equitably allocating resources to its VISNs; (4) these decisions bring with them both solutions to old problems and significant new challenges, such as developing an enrollment process consistent with the priorities established under the eligibility reform legislation and determining when to buy services from the private sector rather than provide them in VA facilities; (5) the Veterans Benefits Administration also faces major challenges; for example: (a) the disability rating schedule has not been updated for over 45 years; (b) VA faces the prospect of late or inaccurate compensation and pension payments to millions of veterans if it is unable to resolve the "year 2000" computer problem; (c) veterans often wait over 2 years for resolution of compensation and pension claims by the time the appeals process has been completed; and (d) VA could avoid millions of dollars in overpayments of compensation and pension benefits by strengthening its ability to prevent such payments; (6) recent legislation, including the Government Performance and Results Act, the Chief Financial Officers (CFO) Act, and the Paperwork Reduction Act, provides a basis for addressing long-standing management challenges; (7) VA has begun to use the legislation to improve its mission performance and results, its financial reporting, and its information resources management; (8) for example, VA included strategic plans for its health and benefits programs in its fiscal year 1998 budget submission, and it has been preparing audited financial statements since 1986, well in advance of the requirements imposed by the CFO Act; (9) multiple options exist for supporting deficit reduction through changes in VA benefits and programs; (10) although some of the changes could be achieved through administrative action, others would require legislation; and (11) the options include: (a) redefining compensation benefits to eliminate compensation for diseases that are not related to military service; (b) imposing higher cost sharing for nursing home and other long-term care services; (c) limiting enrollment in the VA health care system; and (d) closing underused hospitals.
PRWORA amended the authorizing language in statutes governing SSI, TANF, Food Stamp, and housing assistance programs by prohibiting fugitive felons and probation and parole violators from receiving benefits under these programs. To assist law enforcement agencies in apprehending fugitive felons, PRWORA also amended the authorizing legislation for each of these programs to require program officials to disclose the information they maintain on individuals to law enforcement officers when they request it.

Oversight and administration of the SSI, TANF, Food Stamp, and federal housing assistance programs are the responsibility, respectively, of four federal agencies: SSA, HHS, USDA, and HUD. Through the SSI program, SSA oversees the provision of monthly cash payments to people who are blind, disabled, or age 65 or older and have limited income and resources. HHS oversees the TANF program, which provides cash assistance and other work-related services to needy individuals. USDA oversees the Food Stamp Program, which helps low-income individuals purchase food. HUD provides housing assistance to low-income families, including the elderly and persons with disabilities. PRWORA’s fugitive felon provisions apply to eligibility for HUD’s public housing program and to most Section 8 programs. The public housing program provides housing units whose operation, maintenance, and modernization is subsidized with federal funds. Section 8 programs provide needy families with rental assistance through vouchers that can be used in privately owned housing or by occupying government-subsidized housing units.

While SSA directly administers the SSI program nationwide, the Food Stamp and TANF programs are generally administered at the state or local level, albeit with federal money in the case of the Food Stamp Program and a combination of federal and state funds in the case of TANF. Depending on the state, the same staff at local offices may determine eligibility and benefit levels for both Food Stamp and TANF programs. HUD relies on local public housing agencies to administer its public housing and Section 8 programs. Public housing agencies manage and operate local public housing units and enforce tenant compliance with the lease. Low-income individuals and families often participate in more than one of the above public programs.

The federal agencies that oversee these programs work with their OIGs to meet their responsibility to ensure program integrity. In addition to helping to identify fraud, waste, and abuse in these programs, the OIGs participate in federal, state, and local law enforcement agencies’ pursuit and apprehension of individuals wanted for criminal offenses, including felonies. State governments can also play a role in ensuring the integrity of federal programs, particularly TANF and Food Stamp because states or counties administer them. The governments do so through fraud units in state human services departments and state inspectors or auditors general. To illustrate the size of these programs and their potential for fraud, waste, and abuse, table 1 compares the total benefits paid, erroneous payments, and caseload size reported by federal agencies in a single year by program.

Under PRWORA, the fugitive felon provisions in each program’s authorizing language differ somewhat in focus. The use of federal funds by fugitive felons is specifically prohibited in the SSI, TANF, and Food Stamp programs.
For the SSI and Food Stamp programs, individuals identified as fugitive felons are ineligible for benefits for any period in which they are considered to be a fugitive felon or a probation or parole violator. For the TANF program, states are prohibited from using any portion of their federal funding to assist any individual considered to be a fugitive felon. For public housing and Section 8 programs, PRWORA states that fugitive felon and probation or parole violation status “…shall be cause for immediate termination of the tenancy….”

PRWORA’s provisions also require the SSI, Food Stamp, and housing assistance programs to disclose information about felons to law enforcement. Upon request from any federal, state, or local law enforcement officer, program officials must furnish the current address, Social Security number (SSN), and photograph (if applicable) of any benefit recipient. The officer must furnish the name of the recipient and other identifying information to establish the unique identity of the recipient, and also attest that the request is made in conjunction with the officer’s official duties. For the TANF program, the law also states that no “safeguards” a state has established “...against the…disclosure of information about applicants or recipients….” are to prevent the program from furnishing this information to law enforcement officers. Appendix II provides the language by program of PRWORA’s provisions in the authorizing legislation with regard to the eligibility of fugitive felons and probation and parole violators for benefits and the release of recipient information to law enforcement officers.

It is difficult to estimate the number of fugitive felons who could be receiving SSI, TANF, Food Stamp, or housing assistance benefits, or the amount of erroneous payments made to such individuals, because there are no comprehensive data on the total number of people, nationwide, for whom there are outstanding arrest warrants for felonies or probation or parole violations. The Federal Bureau of Investigation (FBI) does compile arrest warrant data from a number of sources, nationwide, in its National Crime Information Center (NCIC) database. The NCIC is a repository for arrest warrants that federal agencies and state and local law enforcement authorities submit to it on a voluntary basis. According to an FBI official, there were about 825,000 outstanding warrants for felonies, serious misdemeanors, and parole and probation violations filed in the NCIC database as of August 2002. NCIC does not report the total number of warrants for felonies alone, nor does it know if all the warrants in the database are outstanding. Some data are available on the number of adults in the United States on probation or parole. According to the Department of Justice, there were 3,839,500 adults on probation in December 2000 and another 725,500 on parole.

The extent to which PRWORA’s fugitive felon provisions have been implemented in SSI, TANF, Food Stamp, and housing assistance programs varies. To help ensure that fugitive felons do not receive benefits for which they are ineligible, most programs ask applicants about their fugitive felon status during the application and recertification processes. A number of programs also match recipient files and arrest warrant data to identify and terminate benefits to fugitive felons who are already on the rolls, but the scope and frequency of such matching activity varies widely.
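As a rough illustration of the computerized matching described above, here is a minimal sketch that compares a recipient file with warrant records, matching on SSN where available and falling back to name and date of birth. All field names and records are hypothetical, and real matches would require manual verification before any benefit action:

```python
# Minimal sketch of matching benefit recipient records against arrest
# warrant data. Field names, records, and match keys are illustrative
# assumptions, not any agency's actual layout.

recipients = [
    {"ssn": "111-22-3333", "name": "PAT SMITH", "dob": "1955-07-19"},
    {"ssn": "222-33-4444", "name": "LEE JONES", "dob": "1948-01-02"},
]

warrants = [
    {"ssn": "222-33-4444", "name": "LEE JONES", "dob": "1948-01-02", "offense": "felony"},
    {"ssn": None, "name": "PAT SMITH", "dob": "1955-07-19", "offense": "parole violation"},
]

def match(recipients, warrants):
    """Return (recipient, warrant) pairs matched on SSN when present,
    otherwise on exact name and date of birth (a crude fallback; real
    hits are referred for manual verification, not automatic action)."""
    by_ssn = {w["ssn"]: w for w in warrants if w["ssn"]}
    by_name_dob = {(w["name"], w["dob"]): w for w in warrants}
    hits = []
    for r in recipients:
        w = by_ssn.get(r["ssn"]) or by_name_dob.get((r["name"], r["dob"]))
        if w:
            hits.append((r, w))
    return hits

for r, w in match(recipients, warrants):
    print(f"{r['name']}: possible {w['offense']} warrant -- refer for verification")
```

The choice of match keys drives both the hit rate and the false-positive rate, which is one reason the scope and reliability of state matching efforts vary so widely.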
All programs also are responding in some way to PRWORA’s requirement that they make information on recipients available to law enforcement when requested. In the Food Stamp and SSI programs, federal OIGs play a critical role in providing law enforcement agencies with this information, at times directly assisting them in the apprehension of fugitive felons on the rolls. HUD and HHS, on the other hand, have done little to ensure their programs’ disclosure of information from recipient records to law enforcement agencies.

Officials in 49 state TANF and 47 state Food Stamp programs reported that they require applicants to answer questions about whether there is a warrant outstanding for their arrest, although there is little evidence available on the degree to which this practice deters fugitive felons from applying for benefits or the number of applicants identified as fugitive felons in this way. In 39 TANF and 43 Food Stamp programs, recipients are also required to respond to these questions when their continuing eligibility is assessed. Two state TANF and four state Food Stamp programs reported doing nothing to determine fugitive felon status at the time an individual applies for benefits. In the Food Stamp and SSI programs, procedures require staff to ask individuals when they apply for benefits whether or not they are fugitive felons. Staff also must ask recipients about their fugitive felon status when their continuing eligibility for benefits is reassessed. At the completion of the application interview, applicants also are asked to sign the application form, which contains a statement certifying that the information they provide is true and that they understand that any misrepresentation of the truth may constitute a crime.

In housing assistance programs, according to HUD officials, PRWORA’s provisions do not make fugitive felons who apply for assistance ineligible. PRWORA states only that fugitive felon status shall be cause for immediate termination of tenancy. To implement this provision, HUD’s regulations state that the lease must provide that the public housing authority may terminate the tenancy during the term of the lease if a tenant is a fugitive felon. Although HUD lacks requirements for systematic methods to prevent fugitive felons from successfully applying for assistance, the agency’s regulations do state that housing agencies have authority to screen out applicants they determine to be unsuitable for admission under public housing authority standards. HUD officials, however, indicated that they were not aware of which, if any, housing agencies screen for fugitive felons.

In addition to asking applicants to report their fugitive felon status when they apply for benefits, SSI and some state Food Stamp and TANF programs identify and remove fugitive felons already on the rolls by comparing entire recipient files with a law enforcement agency’s arrest warrant data. The scope and frequency of computerized matching can vary. SSI and some state Food Stamp and TANF programs use databases containing arrest warrants citywide or countywide, statewide, or nationwide. The FBI’s NCIC database, for example, is a repository of arrest warrant information the FBI receives on a voluntary basis from states and from local jurisdictions. In 2002, the FBI expressed its willingness to compare state Food Stamp and TANF recipient files with warrants in the NCIC database and initiated an information campaign regarding this service.
Fifteen state Food Stamp and 15 state TANF programs reported that they periodically matched their recipient files with arrest warrant data. Most have done these matches on an ongoing basis, in most cases monthly, but the scope of the matches varied by state and program. (See table 2.)

SSA currently matches its nationwide SSI applicant and recipient file with arrest warrant data from the FBI’s NCIC database, the U.S. Marshals Service, and 11 states, usually on a monthly basis. SSA’s current matching process, which covers all SSI applicants and recipients nationwide, has evolved from attempts by its OIG to verify the fugitive felon status of SSI recipients on an ad hoc basis in certain geographic areas. SSA officials said that systematic computer matching of the SSI file and arrest warrant data, although a complex process requiring considerable resources, is the most efficient and comprehensive way to ensure that fugitive felons do not receive SSI benefits. See appendix III for a detailed description of SSA’s monthly matching process.

HUD officials said that the agency has never matched its nationwide database of housing assistance recipients with arrest warrant data from any law enforcement agency to identify program participants who are fugitive felons, even though PRWORA makes fugitive felon status grounds for termination of tenancy. The agency has left the implementation of this provision up to individual public housing agencies, which must identify fugitive felons on their own. HUD regulations give public housing agencies and other landlords the option to evict those identified as fugitive felons but do not require them to do so. However, HUD has not determined the extent to which public housing agencies have implemented the fugitive felon provisions in accordance with its regulations. In addition to its regulations, in July 2002, HUD issued revised model lease language stating that the tenancy of fugitive felons may be terminated.

To help law enforcement officials apprehend fugitive felons, PRWORA calls for programs to provide information from SSI, Food Stamp, TANF, and housing assistance recipient records to law enforcement officers under certain circumstances. Law enforcement agencies can request such information directly from program staff, or indirectly from federal OIGs. In the case of state Food Stamp and TANF programs, they can request information from program officials, from fraud units in state program departments, or from auditors general in state government. In the SSI and Food Stamp programs, there are written procedures that must be followed when responding to these requests. USDA has issued regulations for state Food Stamp programs that mirror the PRWORA provisions and require states to provide household addresses, SSNs, and photographs, if available, to law enforcement officials when they make a request, and 44 state Food Stamp programs reported also having their own written guidelines. In the absence of any guidance from HHS, officials from 44 state TANF programs reported that their programs had their own written guidelines for what program staff and others should do, under PRWORA, when a request for information from recipient records is received from law enforcement officers. Both SSA and its OIG have comprehensive procedural guidelines for handling law enforcement requests for information about SSI recipients.
These guidelines include specific instructions on confirming the identity of the person named on an arrest warrant before information from recipient records can be released. HUD, like HHS, has not issued guidance; it has thus left it up to public housing agencies and other landlords to determine whether and under what circumstances they will respond to such requests from law enforcement officers. HUD has no information on how public housing agencies are handling such requests.

Because of their own law enforcement authority, federal OIGs and state fraud units have often acted as intermediaries between outside law enforcement agencies and program staff to facilitate the exchange of recipient and arrest warrant information. Federal OIGs and state auditors have also played a major role in assisting in law enforcement’s apprehension of fugitive felons on program rolls. Both USDA’s and HHS’s OIGs have conducted such efforts. In both cases, the OIG matched recipient files and arrest warrant data and provided law enforcement officials with the matches. USDA’s efforts, known as Operation Talon, began in 1997 in Kentucky and expanded to include operations in 30 states and D.C. According to USDA’s Inspector General, Operation Talon activities slowed beginning in 2001 when OIG priorities shifted to other areas. HHS’s OIG conducted one initiative in a metropolitan area in Nebraska that ended in the apprehension of a very small number of fugitive felons.

SSA’s OIG has also played a key role in the exchange of arrest warrant and SSI recipient information. SSA’s monthly matching process is designed not only to identify and terminate benefits to fugitive felons on the SSI rolls, but also to provide law enforcement agencies with information from SSI recipients’ records that will assist them in apprehending fugitive felons. SSA and its OIG worked together to negotiate agreements under which federal, state, and local law enforcement agencies provide SSA with arrest warrant information. Once SSA staff match arrest warrants with the SSI file, the OIG helps ensure that the law enforcement agencies that issued the warrants receive the information from SSI records they need to apprehend those identified as fugitive felons. Finally, the OIG tracks and compiles data on the apprehension and termination of benefits to SSI recipients determined to be fugitive felons. See appendix III for a detailed description of SSA’s monthly matching process.

In addition, SSA’s OIG has assisted law enforcement officials in their pursuit and arrest of SSI recipients who are fugitive felons. For example, the OIG has participated in joint investigations with the New York State Division of Parole, the New York State Welfare Inspector General, and the New York City Department of Corrections to identify and apprehend SSI recipients who were parole violators. To assist law enforcement in locating criminals, HUD’s OIG indicated that it has responded to requests from law enforcement for information, usually on a case-by-case basis. As with SSA’s OIG, HUD’s and USDA’s OIGs also have assisted law enforcement agencies in the pursuit and arrest of fugitive felons.

The SSI, Food Stamp, and TANF programs have identified over 110,000 beneficiaries who are fugitive felons, largely through the matching of warrant and enrollee files. When SSA and states have taken the initiative or been in a position to match recipient and warrant data, they have identified a significant number of fugitive felons.
The results from our own comparison of arrest warrant data from just two states with HUD’s nationwide tenant database suggest that many fugitives in housing assistance programs go unidentified. Total cost savings to date are hard to ascertain, however, in part because not all state Food Stamp and TANF programs keep records or make such calculations. SSA’s OIG has reported 5-year cumulative savings of more than $81 million in overpayments and $133 million in projected future savings from the SSI recipients identified as fugitive felons. Since passage of PRWORA, law enforcement officials also have been making thousands of requests for program information to help them apprehend fugitive felons, either for specific case information or for file matches. At least 18,678 arrests have resulted from this effort. This may be a conservative figure because programs do not always receive feedback from law enforcement on the outcome of cases.

Of the more than 110,000 beneficiaries identified as fugitive felons across all programs since 1996, 45,000 have been identified on the SSI rolls through matching. SSA’s experience strongly suggests that matching SSI and arrest warrant data produces better results than less comprehensive methods of identifying fugitive felons, such as case-by-case inquiries. Also, the more comprehensive the warrant database drawn from, the higher the yield. SSA’s access to the NCIC database, together with its array of agreements with states and municipalities, has given the agency a large database of warrants against which to compare its own national recipient file. Although SSA efforts to identify fugitive felons on the rolls increased steadily prior to 2000, the agency’s identification process up until that time was largely confined to manual checks for specific case inquiries and one-time “sweeps” initiated by law enforcement, the OIG, or state fraud units. In 2000, however, after SSA secured a memorandum of understanding with the FBI to provide it with arrest warrant data, SSA began performing monthly matches with its SSI file. This more comprehensive approach resulted in some sharp yearly increases in the numbers of fugitive felons identified. (See table 3.)

Increased savings accompanied the increases in the number of fugitive felons identified. SSA estimates that since the passage of PRWORA, it has identified more than $81 million in overpayments. It has projected future savings of about $133 million on fugitive felons removed from the SSI rolls. Table 3 shows substantial annual increases in those savings beginning in 1999 with SSA’s initial computer matching efforts.

State Food Stamp and TANF administrators have identified many fugitive felons on their program rolls as well. Based on the estimates we received from state officials, in total, over 65,000 have been identified in these programs across all states since 1997. Identification strategies and results varied considerably across states, but those state programs that identified large numbers of felons usually did so by matching their automated recipient files with files of arrest warrants nationwide and/or statewide. For example, sizeable numbers of fugitive felons on TANF and Food Stamp rolls in Missouri, Ohio, and Tennessee were identified using large warrant databases. (See table 4.) Very few state TANF or Food Stamp programs could provide precise estimates of cost savings, but the initiatives in Missouri, Ohio, and Tennessee found that millions of dollars could be saved using matching at the state level.
Because HUD has no data on the numbers of housing assistance recipients whose tenancy could be subject to termination under PRWORA, we conducted our own match of arrest warrants in two states—Ohio and Tennessee—with HUD’s national tenant database. Nationally, we found 927 adults who were living in public housing or Section 8 housing between January 2001 and January 2002 while there were outstanding warrants for their arrest from those two states for felonies or parole or probation violations. These adults were wanted for a wide range of crimes, including possession of or selling dangerous drugs; larceny, such as bank robbery; and assault. (See app. V.) About 65 percent were living in public housing or Section 8 housing outside of Ohio and Tennessee. If the leases on the housing units where these fugitive felons lived or their Section 8 vouchers had been terminated, we estimate HUD could have saved $4.2 million annually in program costs, or made housing available to some of the 9 million eligible families waiting for units nationwide. See appendix I for a detailed description of our matching process.

Law enforcement officers have requested information about thousands of recipients since PRWORA gave them access to program information. With the exception of SSA and a few state human service agencies, however, most do not track or record requests. Collectively, human service agencies in Arizona, Indiana, and Texas reported receiving 29,408 requests from law enforcement for recipient information from fiscal years 1997 through 2001. SSA’s OIG was able to report the number of requests it has received each year from law enforcement since implementation of PRWORA. (See fig. 1.) Although requests increased during the first few years after the law was passed, there was a noticeable decrease in 2000 after the agency began to implement routine file matching. According to SSA officials, the recipient information SSA now routinely provides to law enforcement agencies as part of its monthly matching process most likely reduced the need for law enforcement agencies to request this information.

Since PRWORA was enacted, the information that law enforcement has received from programs about their recipients has resulted in the arrest of at least 18,678 fugitive felons through March 2002, a conservative figure in that law enforcement does not always report arrests to programs, state auditors, or OIGs. SSA, USDA, and some state TANF and Food Stamp officials were able to provide selected statistics. USDA’s Operation Talon—carried out in 30 states and D.C.—resulted in 5,165 arrests for felonies and probation and parole violations between early 1997 and March 2002. SSA reported 5,019 fugitive felons apprehended through 2001. Among states that have screened both TANF and Food Stamp recipients, two were able to report total arrests since 1997: Texas reported 791; New York, 6,980. Finally, a 1998 state audit of Food Stamp and TANF programs in Tennessee that used matching resulted in 403 arrests for serious offenses.

Law enforcement officials do not arrest all recipients who are identified as fugitive felons. Most fugitive felons found on the SSI rolls, for example, were not arrested. SSA, which has a follow-up reporting system in place, received reports on over 5,900 matched warrants from law enforcement agencies in 2001, indicating that over 2,000 fugitive felons had been arrested.
According to SSA, there are many reasons why providing law enforcement with information about recipients identified as fugitive felons may not lead to their arrest: (1) the fugitives could not be located; (2) the recipient was not the person for whom the warrant was issued; or (3) the law enforcement agency that issued the warrant refused to extradite the recipient when he or she was living in another jurisdiction.

There are a number of reasons why the law is not aggressively implemented by all programs that are required to deny benefits to fugitive felons. First, centralized arrest warrant databases are not readily available to programs. Second, programs need the assistance of law enforcement agencies to achieve their goal of removing fugitive felons from the rolls. Third, a lack of information about how to conduct computerized matching and where to find assistance hampers many state program officials. Finally, in the Food Stamp and TANF programs, the lack of criteria for what constitutes a fugitive may interfere with states’ ability to act decisively to deny benefits to those wanted for felonies or probation or parole violations.

There is no single database that contains information on all wanted persons throughout the United States. Local law enforcement agencies, states, judicial agencies, and federal government agencies all maintain various warrant records. The only database that compiles federal, state, and local warrants is the FBI’s National Crime Information Center, or NCIC, established to support criminal justice agencies throughout the country. Yet, according to SSA’s OIG, in August 2000 this file contained only about 30 percent of the local and state warrants issued across the nation. For this reason, SSA has found it necessary to negotiate with many states and even municipalities for access to their local warrants. Among the 75 state-administered Food Stamp and TANF programs that do not perform computer matching to identify fugitive felons, nearly half cited the lack of a statewide repository for warrants as a reason. For the Food Stamp program alone, 16 states reported lacking a central source for statewide or interstate warrant data. Five reported that existing warrant data are outdated and unreliable.

Because computer matching involves using criminal records to which only law enforcement agents are authorized access, program officials must negotiate specific terms with each relevant agency for access to that information. Nearly half of the TANF and Food Stamp programs that have not conducted computer matching cited the burden and complexity of negotiating for access to statewide or NCIC arrest warrant records as a reason why they did not match. In addition to accessing warrant files, law enforcement’s assistance is needed to verify the accuracy of the warrant once a match is made. Officials from USDA’s Food and Nutrition Service (FNS) pointed out that such verification is critical to ensuring that Food Stamp benefits are not improperly denied to otherwise eligible individuals. Consequently, programs are dependent on law enforcement agencies to determine if warrants are valid. Over a third of state officials from TANF and Food Stamp programs that have not matched indicated that they did not because law enforcement agencies are unwilling to verify the results.

State program officials do not necessarily have knowledge about how to design a file matching process, how to enlist the help and cooperation of law enforcement agencies, or where to find centralized sources of warrant data.
Nearly two-thirds of the TANF and Food Stamp officials surveyed whose programs do not use computerized matching said that information about file matching from HHS or USDA would help them assess its feasibility. Over half of these program officials said that information about how matching is performed by other programs or guidance on testing its feasibility would be a moderate to very great help. About half indicated that guidance on federal laws governing either access to arrest warrants or due process prior to terminating benefits under computerized matching would be a moderate to very great help. Many survey respondents also indicated that information about the rules governing use of and access to the NCIC arrest warrant database would be helpful.

There is also evidence that a fugitive felon is defined differently across, and perhaps within, programs. FNS, for example, has directed state Food Stamp programs to deny benefits to individuals with outstanding arrest warrants only when the program has verified that these individuals are aware of the warrants. In contrast, SSA guidelines make no mention of this as a criterion for denying SSI benefits. They state that SSI benefits should be denied to applicants and recipients with outstanding arrest warrants, whether or not the law enforcement agencies that issued the warrants have acted on them. FNS headquarters officials told us that they believed the definition of a fugitive under PRWORA is open to interpretation and that, without further guidance, state Food Stamp programs may be defining it differently as well. They pointed out a number of questions that need to be addressed when deciding whether or not to deny benefits to those with arrest warrants. For example, should individuals with outstanding warrants be considered fugitive felons if they are not aware of the warrants, or if law enforcement agencies have not acted on the warrants within a certain time frame? HHS officials also indicated that, based on the language in the law, it is not clear whether the Congress intended to deny benefits when law enforcement officials lack the resources, jail space, or court time to execute arrest warrants. They contend that the most significant challenge programs face when implementing the law is how to define a “fleeing” felon. Responses to our survey confirm that there may be different definitions of what constitutes a fugitive felon across state programs as well. For example, one state program official indicated that the existence of a warrant alone was not proof that a recipient was fleeing. Consequently, recipients in that state who were fugitive felons reportedly were not removed from the benefit rolls unless they were arrested. An official from another state further questioned whether those who are not aware of warrants for their arrest could be considered fleeing.

There has been some progress in implementing the fugitive felon provisions of PRWORA. Where there has been leadership from program officials, OIGs, or state auditors, large numbers of fugitive felons have been removed from the rolls and apprehended, mostly through the use of matching. However, the law has not been implemented aggressively in all programs. Most strikingly, HUD has done little to ensure that fugitive felons do not receive housing assistance. This could be because the law, as it applies to housing assistance programs, states only that fugitive felon status is grounds for termination of tenancy, not that fugitive felons are ineligible for housing assistance.
Therefore, according to HUD officials, while public housing agencies and landlords have the authority to evict fugitive felons, they are not required to do so. Furthermore, even though HUD maintains its own national database of tenants, it has made no attempt to match it with information from centralized arrest warrant databases such as the NCIC. Such matching, even when done on a limited basis, would be an effective way to identify potentially large numbers of fugitive felons in federal housing assistance programs whom landlords have the authority to evict.

As demonstrated in SSI and a few state TANF and Food Stamp programs, matching recipient and arrest warrant data can be an effective tool for implementing the fugitive felon provisions. However, expanding the use of matching is not an easy task. In this respect, state program officials indicated that they could benefit from the experience of others. Information about how other programs and states have obtained access to and used centralized arrest warrant data, collaborated with law enforcement agencies, and conducted matching could help state TANF and Food Stamp programs plan and develop their own matching procedures.

Increased use of computer matching alone will not ensure that the law is fully implemented in state TANF and Food Stamp programs. The law’s overall effectiveness could be seriously undermined if it is not applied consistently, and there appears to be some question about what constitutes a fugitive felon within TANF and Food Stamp programs. Without knowing how state programs are defining fugitive felons, there is no way for HHS or USDA to determine if the law is being applied consistently and, if not, how to ensure that it is.

Finally, neither HHS nor HUD has issued instructions on the circumstances under which information about benefit recipients is to be released to law enforcement agencies when they request it. As a result, state TANF programs, and HUD public housing agencies and other landlords in this highly decentralized system, may be implementing this provision inconsistently, and law enforcement agencies may not be receiving the information that they request and to which they legally have access.

The Congress should consider amending the Housing Act of 1937 to make fugitive felons ineligible for federal housing assistance. To better implement PRWORA’s fugitive felon provisions, we recommend that the Secretary of Housing and Urban Development (1) test the feasibility and effectiveness of routinely matching HUD’s nationwide tenant file with the NCIC arrest warrant database as a means of identifying tenants in housing assistance programs nationwide who are fugitive felons and subject to eviction and (2) issue guidance on the circumstances under which federal housing programs are required to provide information about residents in federally subsidized housing to law enforcement agencies.
To oversee and better implement these provisions in the TANF program, we recommend that the Secretary of Health and Human Services encourage states to test the feasibility and effectiveness of routinely matching TANF applicant and recipient records with arrest warrants by providing them with information on the matching activities of other state TANF programs and their results and on accessing available arrest warrant databases such as NCIC; monitor states’ computerized matching efforts to identify fugitive felons and their results; determine what criteria state TANF programs are using to remove recipients wanted for felonies or probation or parole violations from the TANF rolls and, if these criteria differ across states, provide TANF programs with clear guidance on the circumstances under which benefits to fugitive felons should be terminated; and issue guidance on the circumstances under which TANF programs are required to provide information about TANF recipients to law enforcement agencies.

To oversee and better implement these provisions in the Food Stamp program, we recommend that the Secretary of Agriculture encourage states to test the feasibility and effectiveness of routinely matching Food Stamp applicant and recipient records with arrest warrants by providing them with information on the matching activities of other state Food Stamp programs and their results and on accessing available arrest warrant databases such as NCIC; monitor states’ computerized matching efforts to identify fugitive felons and their results; and determine what criteria state Food Stamp programs are using to remove recipients wanted for felonies or probation or parole violations from the Food Stamp rolls and, if these criteria differ across states, provide Food Stamp programs with clear guidance on the circumstances under which benefits to fugitive felons should be terminated.

Officials from the Department of Agriculture’s Food and Nutrition Service, the Department of Health and Human Services’ Administration for Children and Families, and the Department of Housing and Urban Development provided comments on our report. The full text of the comments from HHS and HUD appears in appendixes VI and VII, respectively. The Director of FNS’s Program Development Division and other FNS headquarters officials provided oral comments, some of which were technical and were incorporated into the report where appropriate. Officials from the Social Security Administration reviewed the report and had no comments. (See app. VIII.)

In general, FNS officials agreed with our recommendations but voiced concern that the report needed to more fully discuss the legal and procedural complexities involved in implementing the fugitive felon provisions. They said that because what constitutes a fugitive under PRWORA is open to interpretation, eligibility criteria for benefits may vary within and across programs. They indicated their intention to reevaluate their current definition, which treats as fugitives only those who are aware that warrants have been issued for their arrest. Nevertheless, they noted that there may be room for flexibility in how state agencies respond to the individual circumstances of fugitive felons, just as states have flexibility when enforcing certain other Food Stamp program requirements. FNS officials cautioned against viewing the implementation of PRWORA’s fugitive felon provisions outside the context of a program’s administrative structure and its quality assurance procedures and standards.
They noted, for example, that the Food Stamp program is administered by the states, making it more difficult to ensure uniform implementation of the law than in the SSI program, where control is centralized at the federal level. FNS officials noted that, in order to enhance compliance with the law in the Food Stamp program, they recently obtained information from SSA about its fugitive felon procedures and processes and were now working with their OIG to clarify issues related to fugitive felon ineligibility for food stamps. They have also begun to plan pilot projects in several states of procedures that would allow law enforcement time to apprehend fugitive felons before state food stamp agencies take action on their eligibility. These projects will build due process protections into these procedures.

ACF acknowledged that the report provided new and useful information on this topic. However, it expressed concern that the report did not adequately portray the full significance of the challenges associated with implementation of the fugitive felon provisions. In particular, ACF highlighted the challenges associated with defining a fleeing felon. Furthermore, ACF did not concur with our recommendations, for a variety of reasons. It believed that implementing our recommendations would infringe on the authority and flexibility PRWORA gives states to establish their own TANF eligibility rules and procedures and would call for federal action not authorized under PRWORA. We believe that issuing guidance is permissible under PRWORA and can be done in a manner that allows for state flexibility. Guidance simply provides states with a means to make more reasoned judgments about the actions they choose to take.

ACF further argued that neither the Office of Management and Budget’s (OMB) single state compliance audits nor feedback it has received from state agencies have identified problems with the implementation of the fugitive felon provisions. The absence of evidence of problems in either case, however, does not mean that problems do not exist, nor that the monitoring and guidance we recommend would not be useful. ACF also appears to have interpreted our recommendations for monitoring and encouraging computer matching more narrowly than intended. We do not prescribe how ACF should accomplish these tasks. Our recommendations would not require the creation of new data collection procedures or systems. The mechanisms that ACF and HHS’s Office of Family Assistance have in place for communicating and interacting with state and local agencies could be used to effectively implement these recommendations within the limits PRWORA places on federal authority.

ACF disagreed with the recommendation for national guidance describing the circumstances, under PRWORA, in which TANF programs should provide information to law enforcement agencies, noting that 44 state TANF agencies had already developed their own guidance. Our point remains that all state agencies should have established guidance. Furthermore, our review of selected states’ guidance showed that it may not always be consistent with PRWORA’s fugitive felon provisions. Consequently, we continue to believe that national guidance is justified.

In its comments, HUD did not concur with our recommendation to test the feasibility and effectiveness of routinely matching its nationwide tenant file with NCIC arrest warrant data.
HUD said that PRWORA neither requires nor gives the department the authority to conduct computer matching to screen for fugitive felons and parole or probation violators. We disagree with HUD’s assertion that it lacks the authority to conduct computer matching. In our view, HUD does not need any specific statutory authority to conduct computer matching, but any matching it does conduct must comply with the Computer Matching and Privacy Protection Act of 1988. We also note that, even though HUD contended that it lacked the authority to computer match, it did agree to examine computer matching as a possible option for implementing the fugitive felon provisions.

HUD did concur with our recommendation to issue guidance on the circumstances under which federal housing programs are required to provide information about residents to law enforcement agencies. HUD said that its Office of General Counsel was working with its Office of Public Housing and Multifamily Housing to determine the appropriate method (by notice or regulation) to implement PRWORA’s requirement that public housing agencies provide law enforcement with information about tenants who are fugitive felons or parole or probation violators. HUD also described other actions it was taking to better implement PRWORA’s fugitive felon provisions.

Finally, HUD expressed concern about our estimate of the savings that could result from matching its national tenant file with warrants from Ohio and Tennessee. HUD indicated that the assumptions upon which this estimate is based are not likely to hold true in every case. We agree and had already recognized this qualification in the draft report. We continue to believe that our analysis produced a reasonable estimate that has been appropriately qualified.

As arranged with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from its issue date. At that time, we will send copies of this report to appropriate congressional committees and other interested parties. Copies will also be made available to others upon request. In addition, this report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you have any questions about this report, please contact me at (202) 512-9889. Other contacts and staff acknowledgments are listed in appendix IX.

In order to determine the extent to which actions have been taken to ensure that fugitive felons do not receive Supplemental Security Income (SSI), Food Stamp, Temporary Assistance for Needy Families (TANF), or housing assistance benefits, we interviewed federal officials in the Social Security Administration (SSA), the Department of Agriculture (USDA), the Department of Health and Human Services (HHS), and the Department of Housing and Urban Development (HUD) and reviewed regulations and other documents that provide policy on handling fugitive felons as applicants or beneficiaries. We also gathered data from the Offices of the Inspector General (OIG) for SSA, USDA, HUD, and HHS on the number of program participants identified as fugitive felons through their initiatives. We conducted telephone and e-mail surveys with state officials who administer TANF and Food Stamp programs in each of the states and the District of Columbia. In our telephone survey, we collected data on the actions these programs had taken to implement the fugitive felon provisions in the Personal Responsibility and Work Opportunity Reconciliation Act (PRWORA) of 1996.
We also collected information on the extent to which law enforcement agencies have asked for or used beneficiary information from these programs to locate and apprehend fugitive felons, and we asked states for any estimates they had of the number of fugitive felons identified on the rolls and the amount of overpayments to these recipients. In our e-mail survey, we asked state programs that were not conducting matching what prevented them from doing so and what federal agencies could do to assist them in using matching to implement PRWORA’s fugitive felon provisions. We obtained information about SSA’s process for identifying fugitive felons through interviews with and documents provided by its OIG, systems, and program staff involved with the fugitive felon program. Appendix III contains a detailed description of SSA’s fugitive felon matching process. We visited state officials in Delaware to gather information about their initiatives in the Food Stamp and TANF programs and issues surrounding their efforts to establish a data matching process.

To determine the extent to which fugitive felons may be receiving federal housing benefits, we conducted a one-time computer match using HUD’s national data on housing tenants and arrest warrant data from Ohio and Tennessee. We chose these two states because their warrant files were readily available from SSA and because of their prior experience in matching warrant records with SSI, Food Stamp, and TANF recipient files. In addition, both states agreed to participate in our pilot. For computer matching, HUD provided us with national information from its database containing persons in public housing, persons receiving Section 8 tenant-based assistance, and persons in the Section 8 Moderate Rehabilitation program for the period January 2001 through January 2002. The Ohio and Tennessee warrant records were all current as of January 2002. SSA screened the warrant records for us through its automated Enumeration Verification System to verify the Social Security number (SSN) and other pertinent data provided by the law enforcement agency on the warrant against data from SSA’s records.

Using SSA guidelines, we screened the approximately 7.3 million tenant records and deleted records with invalid SSNs, that is, numbers that SSA never issued. Also, if we found multiple persons using the same SSN, we removed their records. After the screening process, we computer-matched the SSNs of 7.1 million tenant records nationally with the numbers of 31,493 persons with fugitive felon warrants issued by Ohio and Tennessee. Our SSN comparisons identified 927 individual fugitive felons with one or more arrest warrants issued by Ohio or Tennessee who resided in public housing or Section 8 housing. As a final check, we compared each matched individual’s name and date of birth from the warrant and tenant records and found minor inconsistencies that, in our opinion, would not change the results.

To estimate cost savings, we analyzed agency data collected for a 1998 HUD report. The report showed data by state on the average annual per unit cost for each housing type. Using these data, we multiplied the number of resident fugitive felons by the applicable average annual per unit cost. If the leases on the housing units in which these 927 individuals lived had been terminated, we estimate that about $4.2 million annually in federal funds could have been made available to others who are eligible for, but could not find, public housing or Section 8 housing.
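In outline, the screening, matching, and cost estimation just described amounts to a simple pipeline. The sketch below restates it in Python as a minimal illustration; the record layouts, field names, and the is_valid_ssn helper are hypothetical stand-ins, not HUD’s actual file formats or SSA’s Enumeration Verification System.

from collections import Counter

def is_valid_ssn(ssn):
    # Hypothetical stand-in for SSA's verification that a number was issued.
    return len(ssn) == 9 and ssn.isdigit() and not ssn.startswith("000")

def match_tenants_to_warrants(tenants, warrants, avg_unit_cost):
    # Screen out tenant records with SSNs that SSA never issued.
    screened = [t for t in tenants if is_valid_ssn(t["ssn"])]

    # Remove records where multiple persons use the same SSN.
    counts = Counter(t["ssn"] for t in screened)
    screened = [t for t in screened if counts[t["ssn"]] == 1]

    # Match the remaining tenant SSNs against the warrant SSNs.
    warrant_ssns = {w["ssn"] for w in warrants}
    matches = [t for t in screened if t["ssn"] in warrant_ssns]

    # Estimate annual savings: each matched resident contributes the
    # average annual per unit cost for his or her state and housing type.
    savings = sum(avg_unit_cost[(t["state"], t["housing_type"])] for t in matches)
    return matches, savings

As in the final check described above, a comparison of each matched individual’s name and date of birth would still be needed before treating any SSN match as confirmed.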
In general. – A State to which a grant is made under section 403 of the Social Security Act shall not use any part of the grant to provide assistance to any individual who is (i) fleeing to avoid prosecution, or custody or confinement after conviction, under the laws of the place from which the individual flees, for a crime, or an attempt to commit a crime, which is a felony under the laws of the place from which the individual flees, or which, in the case of the State of New Jersey, is a high misdemeanor under the laws of such State; or (ii) violating a condition of probation or parole imposed under Federal or State law. The preceding sentence shall not apply with respect to conduct of an individual, for any month beginning after the President of the United States grants a pardon with respect to the conduct.

(k) Disqualification of Fleeing Felons — No member of a household who is otherwise eligible to participate in the food stamp program shall be eligible to participate in the program as a member of that or any other household during any period during which the individual is (1) fleeing to avoid prosecution, or custody or confinement after conviction, under the law of the place from which the individual is fleeing, for a crime, or attempt to commit a crime, that is a felony under the law of the place from which the individual is fleeing or that, in the case of New Jersey, is a high misdemeanor under the law of New Jersey; or (2) violating a condition of probation or parole imposed under a Federal or State law.

It shall be cause for immediate termination of the tenancy of a tenant if such tenant (1) is fleeing to avoid prosecution, or custody or confinement after conviction, under the laws of the place from which the individual flees, for a crime, or attempt to commit a crime, which is a felony under the laws of the place from which the individual flees, or which, in the case of the State of New Jersey, is a high misdemeanor under the laws of such State; or (2) is violating a condition of probation or parole imposed under a Federal or State law.

SSI, provision of program recipient information to law enforcement officers (Section 1611 of the Social Security Act): Notwithstanding any other provision of law (other than section 6103 of the Internal Revenue Code of 1986), the Commissioner shall furnish any Federal, State, or local law enforcement officer, upon the written request of the officer, with the current address, Social Security number, and photograph (if applicable) of any recipient of benefits under this title, if the officer furnishes the Commissioner with the name of the recipient, and other identifying information as reasonably required by the Commissioner to establish the unique identity of the recipient, and notifies the Commissioner that (ii) the recipient has information that is necessary for the officer to conduct the officer’s official duties; and (B) the location or apprehension of the recipient is within the officer’s official duties.
Section 408 of the Social Security Act: If a State to which a grant is made under section 403 establishes safeguards against the use or disclosure of information about applicants or recipients of assistance under the State program funded under this part, the safeguards shall not prevent the State agency administering the program from furnishing a Federal, State, or local law enforcement officer, upon the request of the officer, with the current address of any recipient if the officer furnishes the agency with the name of the recipient and notifies the agency that (i) the recipient has information that is necessary for the officer to conduct the official duties of the officer; and (ii) the location or apprehension of the recipient is within such official duties.

Section 11 of the Food Stamp Act: Notwithstanding any other provision of law, the address, social security number, and, if available, photograph of any member of a household shall be made available, on request, to any Federal, State, or local law enforcement officer if the officer furnishes the State agency with the name of the member and notifies the agency that (i) the member (I) is fleeing to avoid prosecution, or custody or confinement after conviction, for a crime (or attempt to commit a crime) that, under the law of the place the member is fleeing, is a felony (or, in the case of New Jersey, a high misdemeanor), or is violating a condition of probation or parole imposed under Federal or State law; or (II) has information that is necessary for the officer to conduct an official duty; (ii) locating or apprehending the member is an official duty; and (iii) the request is being made in the proper exercise of an official duty; and (E) the safeguards shall not prevent compliance with paragraph (16).

Section 27 of the Housing Act of 1937: Notwithstanding any other provision of law, each public housing agency that enters into a contract for assistance under section 6 or 8 of this Act with the Secretary shall furnish any Federal, State, or local law enforcement officer, upon the request of the officer, with the current address, Social Security number, and photograph (if applicable) of any recipient of assistance under this Act, if the officer (1) furnishes the public housing agency with the name of the recipient; and (2) notifies the agency that (A) such recipient (i) is fleeing to avoid prosecution, or custody or confinement after conviction, under the laws of the place from which the individual flees, for a crime, or attempt to commit a crime, which is a felony under the laws of the place from which the individual flees, or which, in the case of the State of New Jersey, is a high misdemeanor under the laws of such State; or (ii) is violating a condition of probation or parole imposed under Federal or State law; or (iii) has information that is necessary for the officer to conduct the officer’s official duties; (B) the location or apprehension of the recipient is within such officer’s official duties; and (C) the request is made in the proper exercise of the officer’s official duties.

PRWORA’s fugitive felon provisions have two objectives: (1) to ensure that fugitive felons do not receive SSI benefits and (2) to ensure that law enforcement agencies receive the information they need about SSI applicants and recipients to apprehend fugitive felons. To implement the law, SSA has designed a single process that can accomplish both objectives.
By matching arrest warrant information it receives from federal, state, and local law enforcement agencies with its national SSI file, SSA is not only preventing fugitive felons from receiving SSI benefits, but also providing information about those in the SSI file it identifies as fugitive felons to law enforcement agencies so those individuals can be apprehended. SSA’s matching process requires the active involvement of its OIG as well as the involvement and cooperation of federal, state, and local law enforcement agencies. In developing this process, SSA and its regional offices, as well as its OIG, have had to locate sources of arrest warrant information, assess the adequacy of arrest warrant information from each source, and negotiate agreements with the law enforcement authorities that maintain this information to release it to SSA.

SSA began to explore the potential for conducting such computer matching when it approached the U.S. Marshals Service and the Federal Bureau of Investigation (FBI) with proposals to match its SSI file with their arrest warrant information. The FBI’s National Crime Information Center (NCIC) database is the nation’s most extensive computerized criminal justice information system. NCIC consists of millions of records in several files, including files on wanted persons. Law enforcement agencies at the federal, state, and local levels may voluntarily submit their law enforcement information, including arrest warrants, to the FBI for inclusion in the NCIC database. SSA and its OIG secured a memorandum of understanding with the FBI in March 2000 to provide SSA with monthly warrant information on fugitive felons from its NCIC Wanted Person File. Submission of information to the NCIC Wanted Person File is voluntary, however, so the file does not include complete arrest warrant data on fugitive felons nationwide. According to SSA, as of June 2002, 17 states and the District of Columbia submit warrant data on all felonies and parole and probation violators to the FBI. Another 6 states report felonies but not parole and probation violators. Because the other 27 states told SSA that the majority of their warrant information is not entered into the NCIC database, SSA has pursued negotiating agreements with these 27 states to obtain their warrant information. As of June 2002, 20 of the 27 states had agreed to provide SSA with their fugitive felon files. In exchange, once SSA has matched information from warrants with its SSI file, it provides the addresses of those in the file it finds with arrest warrants to the law enforcement agencies that issued those warrants, so that those individuals can be apprehended.

This matching and exchange process involves a number of SSA offices, including its Office of Telecommunications and Systems Operations, its regional and field offices, and its OIG. Successfully implementing this process requires cooperation and commitment from the FBI’s NCIC staff and its Information Technology Center (ITC), as well as from law enforcement agencies at the state and local level. Figure 2 illustrates the steps in the process and those involved.

Usually every month, SSA receives automated arrest warrant information from a variety of sources (step 1 in fig. 2). SSA has found that the information on arrest warrants is not always accurate. Fugitive felons frequently use aliases or provide law enforcement agencies with inaccurate Social Security numbers or dates of birth.
In addition, SSA noted that law enforcement agencies tend to rely on fingerprints for identification because the Social Security numbers individuals report are often unreliable. As a result, law enforcement agencies do not always enter a Social Security number on a warrant. With the exception of the FBI, law enforcement agencies have agreed to provide SSA with arrest warrants that exclude misdemeanors, except in cases of probation or parole violations. To ensure that the misdemeanors are screened out, SSA’s OIG checks the warrant files the first time SSA receives them to verify that the misdemeanors have been removed. Because the FBI did not agree to screen out misdemeanors from its NCIC Wanted Person File, an initial step in SSA’s process is to screen out warrants for misdemeanors in this file.

Using its automated systems, SSA confirms the identity of the individual named on each warrant by comparing information on each warrant (including the individual’s name, Social Security number, date of birth, and gender) with information in the automated records SSA maintains on individuals. If there is no SSA record for a name on a warrant, or if there is no Social Security number in SSA’s records for the name on the warrant, the warrant is eliminated from the file (step 2). When the Social Security number on a warrant is incorrect, according to SSA’s records, or is missing from the warrant, SSA uses its automated systems to attempt to locate the correct or missing Social Security number. Finally, SSA uses its automated systems to eliminate misdemeanors from the warrants it obtains from the FBI’s NCIC database, except in cases where the warrant is for a probation or parole violation (step 3). These steps help to ensure that SSI recipients are not mistakenly identified as fugitive felons.

Next (step 4), SSA matches the remaining arrest warrant records with its computerized SSI file to identify SSI applicants and recipients named on warrants for felonies or probation or parole violations. The results of this match are then forwarded to SSA’s OIG (Office of Investigations) (step 5), which establishes its own investigative case file for each individual named on a warrant (step 6). Then, to provide information to law enforcement authorities for the apprehension of fugitive felons, the OIG forwards information from SSI records about those named on warrants to the FBI’s ITC (step 7). ITC screens out duplicate files, verifies the address and status of each individual named in the matched NCIC warrant records to determine whether the warrant is active (step 8), and forwards these records to the law enforcement agency that issued the warrant (step 9) so that it can locate and apprehend the individual (step 10). If warrants cannot be found in the NCIC database, ITC forwards them to the applicable law enforcement agency for verification (step 9) so that it can locate and apprehend the individual (step 10).

After the appropriate law enforcement agency receives the addresses of individuals named on warrants, it has up to 60 days to notify ITC of the actions taken, if any, on the disposition of each individual’s case (step 11). Next, ITC updates the case file on each individual named on an active warrant to reflect the actions taken by law enforcement (step 12). Then, ITC forwards the information on the actions taken, if any, by law enforcement on each case to the OIG (step 13).
Once SSA’s OIG has been notified or the 60 days have expired, whichever comes first, the OIG reports those cases that require termination or recovery of SSI benefits to the appropriate SSA field office (step 14). The field office then takes whatever administrative actions, including due process safeguards, are required (step 15) and reports this back to SSA’s OIG (step 16). The OIG uses the information provided by the SSA field office on the administrative actions taken to update the individual’s case file (step 17). Although the matching process is complex and requires considerable cooperation and assistance from law enforcement authorities (the FBI, the U.S. Marshals Service, and state, county, and local law enforcement agencies) as well as from SSA’s OIG, SSA believes that it is the most efficient and effective method for implementing both requirements of PRWORA’s fugitive felon provisions.
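For readers who find the step numbering easier to follow in code, the sketch below restates the front end of the monthly match (steps 1 through 4) in Python. The record layouts and field names are hypothetical simplifications; the actual process runs on SSA’s automated systems, also attempts to locate missing or incorrect SSNs, and continues through the follow-up steps 5 to 17 described above.

FELONY, MISDEMEANOR = "F", "M"

def screen_and_match(warrants, ssa_records, ssi_file, source):
    matches = []
    for warrant in warrants:  # step 1: incoming warrants from one source
        # Step 2: confirm the identity on the warrant against SSA's own
        # records; warrants SSA cannot tie to a record are dropped.
        record = ssa_records.get(warrant.get("ssn"))
        if record is None or record["name"] != warrant["name"]:
            continue
        # Step 3: for NCIC data, screen out misdemeanor warrants unless
        # they are for probation or parole violations.
        if (source == "NCIC"
                and warrant["offense_class"] == MISDEMEANOR
                and not warrant["probation_or_parole_violation"]):
            continue
        # Step 4: match the surviving warrants against the SSI file.
        if warrant["ssn"] in ssi_file:
            matches.append(warrant)
    # The matches then go to SSA's OIG (step 5), which opens case files
    # and routes address information to the issuing agencies (steps 6-10).
    return matches

The following people also made significant contributions to this report: William Hutchinson, Laura Luo, Susan Pachikara, Susan Bernstein, Jonathan Barker, Jay Smale, Vanessa Taylor, and Elsie Picyk.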
In response to concerns that individuals wanted in connection with a felony or violating the terms of their parole or probation could receive benefits from programs for the needy, Congress added provisions to the Personal Responsibility and Work Opportunity Reconciliation Act of 1996 that prohibit these individuals from receiving Supplemental Security Income (SSI), Food Stamp benefits, and Temporary Assistance for Needy Families (TANF) and make fugitive felon status grounds for the termination of tenancy in federal housing assistance programs. In addition, the act directs these programs to provide law enforcement officers with information about program recipients for whom there are outstanding warrants to assist in their apprehension. Actions taken to implement the act's fugitive felon provisions have varied substantially by program. In implementing provisions to prohibit benefits to fugitive felons, all but the housing assistance programs include, at a minimum, a question about fugitive felon status in their applications. SSI and some state Food Stamp and TANF programs also seek independent verification of fugitive felon status by using computer matching to compare arrest warrant and program recipient files. To date, over 110,000 beneficiaries have been identified as fugitive felons and dropped from the SSI, Food Stamp, and TANF rolls, and many have been apprehended. Computerized file matching has been responsible for the identification of most of these fugitive felons. Aggressive implementation of the act's fugitive felon provisions poses a number of challenges for programs. First, centralized and complete national and statewide arrest warrant data for computer matching are not readily available. Second, because direct access to arrest warrants and criminal records is limited to law enforcement personnel, computer matching requires what many state TANF and Food Stamp officials view as a burdensome and complex negotiation process to obtain these records. Third, the absence of information and guidance about how to conduct file matching and overcome its logistical challenges has also hindered aggressive implementation of the law. Finally, there is evidence that individuals with outstanding warrants for felonies or probation or parole violations may continue to collect benefits because of differences in the interpretation of what constitutes a fugitive felon within the Food Stamp and TANF programs.
The Department of Education’s FSA manages and administers student financial assistance programs authorized under title IV of the Higher Education Act of 1965, as amended. These postsecondary programs include the William D. Ford Federal Direct Loan Program (often called the Direct Loan program), the Federal Family Education Loan Program (often called the Guaranteed Loan program), the Federal Pell Grant Program, and campus-based programs. Annually, these programs together provide about $50 billion in student aid to approximately 8 million students and their families. During the past three decades, the Department of Education has created many disparate information systems to support these various student financial aid programs. In many cases, these systems—run on multiple operating platforms using different network protocols and maintained and operated by a host of different contractors—were unable to easily exchange the timely, accurate, and useful data needed to ensure the proper management and oversight of the various student financial aid programs. For example, as we reported in 1997, neither the National Student Loan Data System nor other systems were designed for efficient access to reliable student financial aid information, since many systems were incompatible and lacked data standards and common identifiers. In addition, because FSA used three separate systems to originate and/or disburse title IV funds, access to student and school data was fragmented and unreliable. As a result, FSA found it increasingly difficult to quickly access data to support day-to-day operational and management decisions, and schools could not easily access data to obtain a clear picture of the title IV student aid that had been disbursed. In September 1999, FSA issued its initial modernization blueprint, which was subsequently updated in July 2000, to transform the title IV student financial aid systems using technology. COD is one of four school service business processes in FSA’s blueprint and is intended to implement a simplified process for the operation of the Direct Loan and Pell Grant programs. According to FSA’s modernization blueprint, the common origination and disbursement process is composed of seven steps involving students, the Department of Education, and schools: (1) obtain applicant data, (2) determine eligibility, (3) determine award, (4) notify the Department of Education of the intent to disburse, (5) obtain funds from Education, (6) disburse funds to student, and (7) close out. A common process to support origination and disbursement is considered critical to FSA’s goal of achieving an enterprisewide solution that provides real-time data to students, schools, and financial partners via Web portals. To implement COD, FSA is using middleware and XML technologies. Specifically, middleware is being used to integrate FSA systems that support the COD process. Traditionally, systems integration would require building separate point-to-point interfaces between every two applications. Although this approach can be effective, it creates several problems, such as (1) every connection between two applications requires custom programming; (2) a lot of connections have to be developed when there are multiple data sources; and (3) whenever the logic or data in one application changes, the accompanying interface often also needs to be altered. Middleware represents an alternative means to the traditional approach, and it can provide a quicker and more robust solution to systems integration. 
In essence, middleware separates the business application from the technical details of interapplication communications. Thus, middleware can simplify and reduce the number of interfaces for multiple systems because it can handle differences in data formats and record layouts. As part of the COD process, XML is being used to consolidate multiple legacy record formats previously used by schools to submit data on the Pell Grant and Direct Loan programs. By using an XML-based common record, schools can transmit one file with all of the student’s data instead of submitting separate legacy records with redundant student and school information. Appendix I provides a high-level depiction of the systems and technologies supporting the COD process as of November 2002. As depicted, the COD system can translate or convert legacy records by using middleware. In addition, middleware has been built into several existing systems so that they can establish connectivity and exchange data with the COD system through a common IT infrastructure. This IT infrastructure, called the Enterprise Application Integration (EAI) bus, is also implemented using middleware to route data between systems in a correct format. In addition, as part of the COD process, some schools have begun submitting Pell Grant and Direct Loan data using the XML-based common record. FSA hired Accenture as its “modernization partner” to help carry out its modernization blueprint, including the implementation of the COD process. Accenture is the prime contractor providing leadership of critical planning activities that are essential to the success of FSA’s modernization. For the COD system portion of FSA’s modernization, FSA also hired an independent verification and validation contractor to review the initial release of this system, which was completed earlier this year.

FSA has made progress in implementing the new COD process. In particular, it has begun implementing (1) its middleware solution in its IT infrastructure and various existing systems, (2) the COD system, and (3) an XML-based common record. However, FSA’s implementation of COD is behind schedule, and critical work remains to be completed. For example, the basic COD system was to be completed by mid-October 2002; however, only about three-quarters of the COD basic system requirements had been implemented as of October 23, 2002. In addition, FSA is not tracking whether it is achieving certain benefits because it is still in the process of defining applicable metrics to measure progress. Without such tracking processes, FSA lacks critical information about whether it is achieving expected benefits. Finally, FSA lacks assurance that it has captured and disseminated important lessons learned related to schools’ implementation of the common record because it believes that its current ad hoc process is adequate. Accordingly, the thousands of schools that have not yet implemented the common record may not benefit from the experience of those that have.

FSA has made progress in implementing COD. The following are significant elements of the COD process that have been implemented:

Deployment of the EAI bus. As a prerequisite to implementing COD, in late October 2001, FSA deployed its middleware solution in an EAI “bus”—an IT infrastructure that uses middleware to access data from disparate systems, transform the data formats as necessary, and route the data to the appropriate requesting systems, thus enabling data exchange among disparate systems.
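To make the two ideas above concrete, the sketch below first counts the interfaces that point-to-point integration would require compared with one adapter per system on a middleware bus, and then consolidates two hypothetical legacy records into a single XML common record. The system names, field names, and record layouts are our own illustrative assumptions, not FSA's actual EAI bus or the common record schema FSA defined with the higher-education community.

```python
# Illustrative contrast between point-to-point integration and a middleware
# hub, plus a toy XML common record. All names and layouts are hypothetical.
import xml.etree.ElementTree as ET

SYSTEMS = ["COD", "CPS", "NSLDS", "DLOS", "RFMS"]

# Point-to-point: every pair of systems needs its own custom interface.
n = len(SYSTEMS)
print("point-to-point interfaces needed:", n * (n - 1) // 2)  # 10
print("adapters needed with a middleware bus:", n)            # 5

def to_common_record(pell_legacy: dict, loan_legacy: dict) -> str:
    """Consolidate two legacy records, which repeat the same student and
    school data, into a single XML common record."""
    record = ET.Element("CommonRecord")
    student = ET.SubElement(record, "Student", SSN=pell_legacy["ssn"])
    ET.SubElement(student, "School").text = pell_legacy["school"]
    ET.SubElement(student, "PellGrant", Award=str(pell_legacy["award"]))
    ET.SubElement(student, "DirectLoan", Amount=str(loan_legacy["amount"]))
    return ET.tostring(record, encoding="unicode")

pell = {"ssn": "000-00-0000", "school": "Example State U", "award": 4000}
loan = {"ssn": "000-00-0000", "school": "Example State U", "amount": 5500}
print(to_common_record(pell, loan))
```

With five systems, point-to-point integration needs ten custom interfaces while the hub needs only five adapters, and the gap widens as systems are added; in FSA's architecture, the EAI bus plays this hub role.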
The EAI bus provides the set of technical capabilities necessary to integrate FSA’s disparate systems.

Initial implementation of basic COD system (release 1.x). On April 29, 2002, FSA went live with version 1.0 of the basic COD system. As of mid-November 2002, FSA had released an additional five sub-versions of the COD system (e.g., version 1.1). The COD system replaces the Direct Loan Origination System and the Recipient Financial Management System, and it currently processes files for all schools participating in the Pell Grant and Direct Loan programs. According to FSA, in the first 6 months of its operation, the COD system processed just under 16 million transactions, representing Pell Grant and Direct Loan awards totaling almost $10 billion to over 5 million recipients.

Implementation of middleware in selected systems. As of mid-November 2002, FSA had built middleware into seven systems so that these systems can interact with the COD system through the EAI bus. These systems include (1) the Central Processing System, which determines students’ eligibility and award levels, and (2) the National Student Loan Data System, which contains loan- and grant-level information and is used by schools to screen student aid applicants to identify ineligible borrowers.

Development and implementation of the common record. Using XML, FSA developed and began implementing a common record that schools can use to submit student financial aid data to the COD system. The common record, designed with assistance from members of the National Council of Higher Education Loan Programs and the Postsecondary Electronic Standards Council, consolidates multiple legacy file formats previously used by the Pell Grant and Direct Loan programs.

Although FSA has made progress in implementing the COD process, critical work remains to be completed. First, FSA is behind schedule in implementing the basic COD system. Although FSA had planned to complete the basic COD system by mid-October 2002, only about three-quarters of the COD basic system requirements had been implemented as of October 23, 2002. For example, as of early November 2002, one of the basic business functions that remains to be implemented is the capability for FSA to make automated adjustments in batches to schools’ current funding levels. FSA now estimates that most of the remaining functionality will be completed by the end of September 2003. According to FSA IT and program officials, the implementation of the basic COD functionality was delayed to allow adequate time for testing to ensure the quality of the system. Second, as of November 19, 2002, Accenture reported several operational problems that needed to be addressed. For example, in some cases, the COD system was incorrectly processing school batch data that contained multiple change records for an individual student. According to COD and contractor officials, the causes of operational problems included unclear requirements and software design defects. An independent verification and validation contractor also found problems with the requirements and design aspects of release 1.0. The COD Contracting Officer’s Representative characterized these operational problems as very serious and stated that they could impede operations and the delivery of future COD releases. This same official noted that FSA and Accenture are currently undertaking efforts to address these problems. For example, FSA has established production teams composed of agency and contractor staff to address problems in specific areas.
In addition, FSA has established a continuous improvement process to more rigorously manage its relationship with Accenture. Third, fewer postsecondary schools than planned have implemented the common record. FSA had estimated that 50 schools (out of about 5,500) would implement the common record in fiscal year 2002. However, as of November 26, 2002, only 22 schools had implemented and tested the common record with FSA. FSA COD officials attributed the fewer-than-expected number of schools using the common record to schools and vendors not being ready to implement it. FSA expects that the number of schools using the common record will be considerably higher during the next award year (2003–2004) because, by April 2003, it plans to implement and test the common record with EDExpress, a software application FSA distributes free of charge to about 3,000 schools for use in submitting data. In addition, FSA expects that all schools will be using the common record format by March 2004, in time for the 2004–2005 award year. In its COD business case, FSA outlined five expected benefits: (1) reduced cost, (2) increased customer satisfaction, (3) increased employee satisfaction, (4) increased financial integrity, and (5) the integration and modernization of legacy systems. An important aspect of implementing an IT investment cited by the Office of Management and Budget and our IT investment management guide is evaluating the results of the investment by determining whether such expected benefits are being achieved. However, as illustrated in table 1, at this time FSA has only some of the data necessary to determine whether it is achieving all expected benefits. In particular, for the increased customer satisfaction and financial integrity benefits, FSA (1) has not fully defined the performance metrics to be used, (2) does not have all baseline data, and/or (3) is not fully tracking whether the benefits are being achieved. In these cases, FSA COD officials stated that they were in the process of developing relevant metrics, which would be tracked to measure the project’s performance against expected benefits. However, until FSA develops these data and begins tracking actual benefits and comparing them with expected benefits, it will lack vital data with which to demonstrate actual investment results. FSA IT officials also stated that they plan to have a contractor conduct a postimplementation review of the COD basic system in fiscal year 2003, which is expected to look at the achievement of expected benefits. While this is an important initiative that could provide FSA with valuable information, it does not take the place of a continuing and systematic process of tracking actual benefits. According to our IT investment management guide, another critical activity is establishing a process for developing and capturing lessons learned in a written product or knowledge base and disseminating them to decision-makers. Lessons-learned mechanisms serve to communicate acquired knowledge more effectively and to ensure that beneficial information is factored into planning, work processes, and activities. Lessons learned can be based on positive experiences that save money or on negative experiences that result in undesirable outcomes. FSA has recognized the importance of generating lessons learned in certain areas. For example, it has implemented a process for developing lessons related to managing the relationship between the agency and its prime contractor.
However, FSA lacks such a process for capturing or disseminating lessons related to school migration to the common record. FSA COD officials stated that lessons learned pertaining to school migration to the common record are addressed through periodic discussions during biweekly conference calls with schools undergoing testing with FSA and during portions of various FSA-sponsored conferences. FSA COD officials stated that they believed this process for capturing and disseminating lessons learned was adequate. However, by relying on such an ad hoc process, FSA lacks assurance that it has captured and disseminated all key lessons learned related to schools’ implementation of the common record and could overlook important improvements that could be made. In addition, schools that do not attend the conferences may not receive and benefit from the lessons identified in the initial phase of implementation. As a result, schools may encounter problems that could have been avoided or mitigated had they known of other schools’ experiences. This could hamper FSA’s ability to facilitate the transition of schools to the new common record and thus the agency’s ability to fully implement the new COD process and achieve the expected benefits. In commenting on a draft of this report, FSA stated that it plans to provide lessons learned as part of a planned update to its school testing guide. While this is a positive step, it does not replace the need for mechanisms to continuously capture and disseminate acquired knowledge as schools implement the common record. Table 2 includes examples of lessons learned provided by FSA at our request that were drawn from schools’ initial implementation of the common record for the 2002–2003 award year. Such information would be important for the thousands of schools that have not yet implemented the common record so that they can avoid problems during the common record implementation and testing processes. FSA has taken important steps toward achieving full implementation of the new COD process. However, critical actions, such as completing the basic functionality of the COD system and the implementation of the common record at thousands of affected schools, must still be undertaken. In addition, FSA has not yet fully established the metrics and processes to track actual benefits related to all of its expected benefits or the lessons that have been generated by the few schools that have implemented the common record thus far. By not tracking actual benefits, FSA lacks information that is critical to determining whether it is meeting all of its goals. Further, not capturing and disseminating information to schools regarding lessons learned could make achieving these goals more difficult. To determine the extent to which the new COD process is achieving expected results related to customer satisfaction and financial integrity, we recommend that you direct FSA’s Chief Operating Officer to expeditiously develop metrics and baseline data to measure these benefits and develop a tracking process to assess the extent to which the expected results are being achieved. To ensure that the schools that have not yet implemented the common record benefit from the experiences of those that have, we recommend that you direct FSA’s Chief Operating Officer to establish a process for capturing lessons learned in a written product or knowledge base and for disseminating them to these schools. 
In providing written comments on our draft report, FSA’s Chief Operating Officer provided technical comments and updated information, but did not comment on our recommendations. Specifically:

The Chief Operating Officer did not believe the report adequately portrayed the level of COD progress that had been made. In particular, she took issue with our using the completion of 75 percent of COD requirements as an indication of progress. Although the Chief Operating Officer did not disagree with the accuracy of this figure, she stated that FSA’s informal analysis indicated that between 85 and 90 percent of COD functions had been implemented, which she believed was a better gauge of progress. We believe that we have accurately portrayed FSA’s progress in implementing the COD process. First, since FSA’s analysis was “informal,” and FSA’s supporting documentation had limited detail that we could not validate, we do not agree that this should be the primary basis for an analysis of COD’s progress. Second, we included both the percentage of COD’s requirements that had been implemented and FSA’s estimate in our report. Nevertheless, we modified our report to include additional data provided by FSA regarding the number of transactions processed by the COD system to further indicate progress.

The Chief Operating Officer agreed that the tracking of all of the expected benefits is not in place at this time, but stated that work is under way in this area. FSA also provided updated information and supporting documentation related to the tracking of some of the expected benefits. We made changes to the report reflecting this new information, as appropriate.

The Chief Operating Officer agreed that it is important and beneficial to communicate lessons learned, but stated that FSA’s informal method for communicating lessons related to school migration to the common record worked well in the first year of COD implementation. FSA also noted that it plans to include lessons learned in a planned update to its school testing guide. We modified the report to reflect this initiative, but we do not agree that FSA’s informal method or its plan to include lessons learned in its testing guide is adequate because these approaches do not provide a continuous process for actively capturing and disseminating lessons learned. As a result, some important lessons may be overlooked, and all schools may not be aware of potential problems associated with implementing the common record.

FSA’s written comments, along with our responses, are reproduced in appendix II. We are sending copies of this report to the appropriate congressional committees, the Secretary of Education, the Chief Operating Officer of Education’s Office of Federal Student Aid, and the Director of the Office of Management and Budget. This report will also be available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions on matters discussed in this report, please contact me at (202) 512-9286 or Linda J. Lambert, Assistant Director, at (202) 512-9556. We can also be reached by E-mail at pownerd@gao.gov and lambertl@gao.gov, respectively. Other individuals making key contributions to this report included Jason B. Bakelar and Anh Q. Le.
[Appendix I figure: a high-level depiction of the systems and technologies supporting the COD process, showing legacy records exchanged with the COD system via MQSeries, a single common record file submitted in XML, and 18 legacy files transmitted by secured FTP. Legend: FTP = File Transfer Protocol; LOWeb = Loan Origination Web; NSLDS = National Student Loan Data System; PEPS = Postsecondary Education Participants System; XML = Extensible Markup Language. Notes: the DLOS interface is temporary, as DLOS is targeted to be retired in fiscal year 2003; MQSeries is IBM’s proprietary message format; the EAI bus is an information technology infrastructure that enables data exchange among disparate systems.]

The following are GAO’s comments on the Department of Education’s Office of Federal Student Aid’s letter dated December 18, 2002.

1. We revised the draft report title to clarify that our follow-up review was focused on assessing FSA’s progress in implementing the COD process.

2. Information related to operational problems was contained in the draft report. We asked the COD Contracting Officer’s Representative to characterize these problems, and he stated that they were very serious. In addition, we confirmed the seriousness of these problems at the conclusion of our review with FSA IT and program officials.

3. We modified this report to include data on the number of transactions processed. We also modified our report to clarify that all schools participating in the Pell Grant and/or Direct Loan programs currently use the COD system.

4. We do not agree that the report implies that FSA’s use of a phased-in approach in implementing the common record increases risks. Instead, this report notes that the implementation is not yet complete.

5. We believe that we have accurately portrayed FSA’s progress in implementing the COD process. First, since FSA’s analysis was “informal,” and FSA’s supporting documentation had limited detail that we could not validate, we do not agree that the COD Development Manager’s functionality estimate should be the primary basis for an analysis of COD’s progress. Second, we included both the percentage of COD’s requirements that had been implemented and FSA’s estimate in our report.

6. We modified our report to reflect this updated information as appropriate.

7. We do not agree that FSA’s informal process for capturing and disseminating lessons learned was adequate because (1) it may lead to important lessons being overlooked and (2) all schools may not be aware of potential problems associated with implementing the common record.

8. We modified this report to reflect that FSA plans to include lessons learned in a planned update to its school testing guide. While this is a positive step, it does not replace the need for mechanisms to continuously capture and disseminate acquired knowledge as schools implement the common record.
To address system problems and other long-standing management weaknesses, in 1998, the Congress created a discrete unit within the Department of Education, the Office of Federal Student Aid (FSA). This office subsequently adopted a new approach to systems integration using middleware (a type of software that can allow an application to access data residing in different databases) and Extensible Markup Language (XML), a flexible, nonproprietary set of standards that is intended to make it easier to identify, integrate, and process information widely dispersed among systems and organizations. FSA's first use of this approach is the Common Origination and Disbursement (COD) process for the Direct Loan, Pell Grant, and campus-based programs. GAO initiated a follow-up review to assess FSA's progress in implementing this process. FSA has made progress in implementing the COD process. Specifically, it has implemented (1) a new information technology infrastructure that uses middleware to enable data exchange among disparate systems; (2) the initial version of the basic COD system, which replaces two existing systems and is being used by schools participating in the Pell Grant and Direct Loan programs; (3) middleware in existing systems to support the COD process; and (4) a common record based on XML that schools can use to submit student financial data for the Pell Grant and Direct Loan programs. However, the implementation of the COD process is behind schedule, and its ultimate success hinges on FSA's completing critical work, including addressing serious postimplementation operational problems, and having thousands of postsecondary schools implement the common record. Further, there are important elements to managing any information technology investment that FSA has not yet completed: Determining whether expected benefits are being achieved. FSA has only some of the metrics, baseline data, and tracking processes necessary to determine whether it is achieving all expected benefits. Tracking lessons learned. FSA has relied on an ad hoc approach for gathering and disseminating lessons learned related to schools' implementation of the common record. To address this issue, FSA plans to include lessons learned as part of an update to its school testing guide. However, this does not replace the need for an ongoing mechanism to capture and disseminate lessons learned, without which schools may encounter problems that could have been avoided or mitigated.
VA spent about $21 billion to provide health care services, including acute medicine, surgery, mental health, and long-term care, to about 4.2 million veterans during fiscal year 2001. Of VA’s 4,700 buildings, over 40 percent have operated for more than 50 years, including almost 200 built before 1900. Over 1,600 buildings have historical significance that requires VA to comply with special procedures for maintenance and disposal. VA’s health care infrastructure was designed and built to reflect a concept of hospital-centered inpatient care, with long stays for diagnosis and treatment. This concept is now outdated as new technology and treatment methods have shifted delivery from inpatient to outpatient services where possible and shortened lengths of stay when hospitalization is required. As a result, VA’s capital assets often do not align with current health care needs for optimal efficiency and access. To address this situation, CARES will assess veterans’ potential demand for health care over the next 20 years, identify potential service gaps and develop delivery options for meeting veterans’ needs, and guide the realignment of capital assets to support the preferred delivery options. VA conducted a pilot test in the Great Lakes network, which served about 220,000 veterans in fiscal year 2001 with an annual budget of $891 million. This network includes three general market areas: northern Illinois (Chicago), Wisconsin, and the Upper Peninsula of Michigan. In February 2002, the Secretary of Veterans Affairs selected strategies for realignment of services. These strategies included (1) consolidation of services at existing locations, (2) opening of new outpatient clinics, and (3) closure of one inpatient location. Subsequently, VA identified 30 vacant buildings that were no longer needed to meet veterans’ health care needs. Of the 30 buildings, 11 are considered to be historic. Under the provisions of the National Historic Preservation Act, federal agencies are required to take into account the effect of any federal undertaking on any historic property. Until a decision is made on demolition, agencies that own or control historic properties are required to preserve their historic character and minimize harm to them. The act also establishes federal agency responsibilities that must be met if historic properties are to be demolished. During fiscal year 2001, officials in VA’s Great Lakes network told us that an estimated $750,000 was spent to maintain vacant buildings, primarily for utilities. Network officials told us that this represents a relatively small portion of the total resources that would be needed to operate these buildings adequately for the delivery of health care or other purposes; actual expenses were lower because the buildings are no longer used for health care. In general, the network considered three options when developing property disposal or other plans for vacant buildings: Enhanced-Use Leasing, demolition, or transferring the property to GSA, which has the authority to dispose of excess or surplus federal property under the Federal Property and Administrative Services Act of 1949 (Property Act). Under Enhanced-Use Leasing, VA may lease property to others for up to 75 years; it may transfer title to the lessee at some time during the life of the lease if such transfer is in the best interests of the federal government. Demolition is a viable option when the associated costs can be recovered within a reasonable period, primarily through the avoidance of maintenance costs.
If VA reports the property to GSA as excess, GSA identifies potential users for the property by going through several levels of screenings that evaluate users in the following order of priority: federal users; organizations that will use the property for homeless programs under the Stewart B. McKinney Homeless Assistance Act; nonprofit organizations that may want the property for public uses such as parks, museums, or educational facilities; and state or local governments. If none of these screening processes produce a user, the site is made available for public sale. Following the pilot test in the Great Lakes network, VA made significant modifications to its CARES procedures, including development of a more systematic process to guide decisions involving the management of vacant buildings. For example, networks will use a common format for estimating future maintenance costs, as well as potential demolition costs. However, the model does not include either the costs associated with the transfer of property to GSA or the potential revenue that could be realized. VA has negotiated Enhanced-Use Leases for 10 vacant buildings and is negotiating Enhanced-Use Leases for 3 buildings. Four buildings have been demolished, and 4 additional buildings will be demolished. VA currently has no disposal plans for the other 9 buildings. In April 2002, VA contracted with the Chicago Medical School for an Enhanced-Use Lease of 10 vacant buildings at VA’s North Chicago health care delivery location. The medical school will either renovate or demolish these buildings and in return will purchase utilities, including steam, electricity, and chilled water, from a VA-operated facility. In addition to generating revenue from the sale of utilities, the network will avoid annual maintenance costs of over $440,000. VA is negotiating an Enhanced-Use Lease with Catholic Charities of Chicago for 3 vacant buildings at the Hines VA hospital in Chicago. Two of the three were considered historic; VA network officials took steps to have the historic designation removed. VA expects Catholic Charities to renovate the buildings to make them suitable for transitional housing for the homeless. VA also expects to receive rental payments as well as reimbursement for utilities, grounds maintenance, and snow removal. In addition, VA is negotiating with Catholic Charities to use at least 50 percent of the housing for veterans who need this service. Network officials told us that utilities were turned off and that no funds were spent on these buildings for other purposes during fiscal year 2001. Four buildings have been demolished, and 4 others will be demolished. At the Chicago Health Care System’s West Side Division, the Enhanced-Use Lease partner demolished 3 buildings in November 2002 to provide space for a new parking garage and a Veterans Benefits Administration regional office. The U.S. Navy demolished 1 building at North Chicago on land that VA transferred to it for future use. Four other buildings will be demolished because they present safety hazards or the land is needed to expand existing VA facilities, including cemeteries. These buildings are located at the Milwaukee health care facility and Hines VA hospital. Two of these buildings, located at Milwaukee, are historic. The other two buildings are at Hines. One of the two was considered historic. Network officials told us they were successful in having the historic designation removed.
This building will be demolished in order to construct a surface parking lot for a new spinal cord injury/blind rehabilitation center. During fiscal year 2001, VA spent about $17,000 to operate and maintain these 4 buildings. Despite the efforts of network officials, the lack of interest in 9 of VA’s vacant buildings has been an obstacle to finding alternate uses for these buildings. Network officials believe that maintaining ownership of the vacant buildings is the least expensive course of action, given the relatively high demolition costs compared to annual maintenance costs and considerable uncertainties concerning VA’s potential costs to transfer the properties to GSA. Network officials told us that they have attempted to interest outside organizations in utilizing the 9 vacant buildings without success. For example, officials at the medical center in Tomah, Wisconsin, offered to transfer ownership of a 23,579-gross-square-foot building to a local Indian tribe for use as office space and an outpatient clinic. The building, which was constructed in 1929, has been vacant since 1983. According to VA, the offer was turned down because of the $2 million cost of renovations needed to make it suitable for this purpose. The medical center director told us that because Tomah is located in a rural area, it has been difficult to find other organizations interested in this building and its two other vacant buildings. Likewise, officials at the Milwaukee medical center told us that they have had discussions with other organizations concerning use of 6 vacant buildings. They have tried to generate interest in the buildings as elderly housing, as office space, and for a youth home. These officials suggested that two of the vacant buildings, a theater and a chapel, could, when renovated, be used for these purposes if interested parties could be found. They told us they have held discussions with other government agencies, school organizations, a labor union, and charitable organizations without success. Network officials cited a second obstacle, namely that the cost to demolish the 9 vacant buildings could not be recovered through avoidance of maintenance costs, such as utilities, within a reasonable period. For example, the network determined that the cost to demolish 3 of these 9 vacant buildings would be about $500,000, while maintenance costs for the 3 buildings were about $26,000 during fiscal year 2001. The shortest recovery period, that is, the number of years over which avoided maintenance costs would equal the demolition cost, was about 11 years, for one of these buildings. This 33,910-square-foot building, located in Tomah, Wisconsin, has been vacant since 1998. According to VA, the cost to demolish this building would be $212,000, while the medical center spent about $18,600 for utilities for it during fiscal year 2001 ($212,000 divided by roughly $18,600 per year is about 11 years). By contrast, demolition costs for 1 building would likely take over 40 years to recover. This 23,579-gross-square-foot building has been vacant at the Tomah medical center since 1983. According to VA, the cost to demolish this building would be $308,000, while the medical center spent about $7,000 to maintain it during fiscal year 2001 ($308,000 divided by $7,000 per year is 44 years). In addition, network officials cited, as a third obstacle, the uncertainty of the costs VA would incur in declaring the 9 buildings excess property under the provisions of the Property Act. First, there is no assurance that VA could save money, given that property-holding agencies, such as VA, incur costs in disposing of excess property with GSA.
Property-holding agencies are generally responsible for mothballing and stabilizing property in order to prevent its further deterioration pending transfer to another federal agency or its disposal. According to GSA, the landholding agency is also responsible for studies to detect the presence of hazardous wastes as well as archeological sites. GSA officials also told us that they are committed to maintaining the best and highest use for the property and that historic property will be transferred only under covenants that protect its historic designation; all 9 buildings are considered historic. According to a network official, the Great Lakes network has not determined whether the cost of transferring these excess buildings to GSA exceeds the cost of continuing to own and maintain them. Second, VA does not consider the transfer of vacant buildings to GSA (by declaring them excess) to be an attractive option. This is because proceeds that are received from the sale of real property must be deposited into the VA Nursing Home Revolving Fund, which is only to be used for the construction of nursing homes. VA would prefer to use these proceeds for the delivery of inpatient and outpatient services for veterans as well as long-term care. VA officials told us that they had proposed legislation that would allow VA to use sales proceeds to support veterans’ health care delivery, but it was not enacted. As a result, VA would prefer to pursue Enhanced-Use Leases, which will allow VA to use revenue to meet the overall health care needs of veterans. Officials in VA’s Great Lakes network have made progress dealing with vacant buildings that are no longer needed in the delivery of health care to veterans. When there is no Enhanced-Use Lease potential, however, these officials have encountered several obstacles, including potentially high demolition costs or uncertain site preparation costs associated with reporting buildings to GSA as excess to VA’s needs. Understandably, they are reluctant to commit potentially large amounts of health care resources for the demolition or site preparation without sufficient assurance that most or all costs will be recovered. The Great Lakes network has retained ownership of 9 vacant buildings and thus continues to spend medical care resources to maintain them. As the CARES process is completed in the 20 remaining networks, the costs associated with the growing number of unneeded buildings that will be identified will also increase. Therefore, it is critical that VA take the steps needed to systematically evaluate all relevant cost information. VA’s recent changes to the CARES process provide a framework for making effective decisions, but because the changes have not been tested, it remains unclear whether they will function as an effective model that includes complete cost information concerning options to dispose of or find alternate uses for vacant buildings. To ensure that the newly developed CARES model for managing excess buildings will provide an effective decision-making tool that could be used in the other networks, we recommend that the Secretary of Veterans Affairs conduct a pilot test of the model in the Great Lakes network and make modifications, if needed. In commenting on a draft of this report, VA agreed with our findings and conclusions and concurred with our recommendation. VA’s letter is reprinted in appendix II. We modified the report to use the term “Enhanced Use Leasing,” as VA suggested. We also incorporated VA’s technical comments as appropriate.
VA also emphasized that it had proposed legislation that would allow VA to use sales proceeds to support veterans’ health care delivery, but it has not been enacted. Also, VA expressed concern that the process for removing buildings from historic preservation status is a significant obstacle when it attempts to find alternate use for or dispose of all remaining buildings. We agree that this process complicates VA’s ability to manage vacant buildings, but as we stated in our report, VA has been successful in removing the historic designation of buildings in the Great Lakes network in order to facilitate demolition or alternate use. Factors such as the constraints on the ability to retain proceeds from the sale of real property and the need to address historical building issues are shared by many real property-holding agencies. We discuss the factors associated with excess property in the federal government as a whole in a soon-to-be-released report on longstanding problems in the federal real property arena. We are sending copies of this report to the Secretary of Veterans Affairs and other interested parties. We will also make copies available to others upon request. In addition, the report is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff members have any questions about this report, please call me at (202) 512-7101. Key contributors to this report were Paul Reynolds, Behn Miller, and John Borrelli. To assess the Department of Veterans Affairs’ (VA) efforts to manage unneeded, vacant buildings, we obtained information from the Great Lakes network on the number of such buildings, the cost to maintain the buildings, and its efforts to find alternate uses for the buildings. We asked for information such as the age of the buildings, the year in which they became vacant, the cost of utilities and other operating costs, as well as the cost of any needed repairs. We also asked about the network’s plans to manage these buildings through such actions as demolition or Enhanced-Use Lease. After we received this information, we visited the network and interviewed the Director and other network staff members about their efforts to deal with unneeded, vacant buildings. We discussed with these officials their plans for implementing Capital Asset Realignment for Enhanced Services (CARES) options selected by the Secretary. We visited the Hines, Milwaukee, and North Chicago hospitals. During our visits we met with hospital directors, associate directors, and their staffs. We discussed with these officials their actions to find alternate uses for the buildings and problems they have encountered in doing so. By telephone, we discussed with the Director of the Tomah hospital and members of his staff information on the hospital’s vacant buildings. At the Milwaukee and North Chicago hospitals, we visually inspected vacant buildings. We did not tour vacant buildings at Hines because of building safety concerns. At VA headquarters, we met with officials to discuss the CARES process and VA’s plans for managing vacant buildings. We reviewed CARES planning documents, including information supporting the network’s August 2002 realignment decisions. We also met with VA’s Historic Preservation Officer to discuss the impact of historic significance on VA’s ability to take actions on unneeded vacant buildings.
We met with General Services Administration officials to discuss the process for disposing of excess property as well as proposed legislation aimed at improving federal agencies’ ability to manage federal property. We also discussed management of historic properties with officials at the National Trust for Historic Preservation. We performed our review from January 2002 through January 2003 in accordance with generally accepted government auditing standards.
The Department of Veterans Affairs (VA) has changed from a hospital-based system to primary reliance on outpatient care. As a result, VA expects that the number of unneeded buildings will increase. Veterans' needs could be better served if VA finds ways to minimize resources devoted to these buildings. VA must have an effective process to find alternate uses or dispose of unneeded property. In August 2002, VA completed a pilot test for realigning its health care system in the Great Lakes network. The pilot identified 30 buildings that are no longer needed to provide health care to veterans. VA is currently studying how to realign assets in its 20 remaining networks. GAO was asked to review VA's management of unneeded buildings in its Great Lakes network. The Great Lakes network has developed or implemented alternative use or disposal plans for 21 of the 30 unneeded, vacant buildings. VA has leased 10 of the buildings to the Chicago Medical School and is negotiating a lease for 3 buildings with Catholic Charities of Chicago. Four buildings were demolished, and 4 buildings will be demolished in order to construct new facilities or to expand an existing cemetery. The network identified three obstacles that hinder alternative use or planning for the remaining buildings: (1) VA has been unable to find organizations interested in using the vacant, unneeded buildings due primarily to their location or physical condition; (2) VA may spend more to demolish buildings than it would spend to maintain the buildings as is; and (3) VA is reluctant to transfer disposal responsibility for the buildings to the General Services Administration, primarily because (a) VA would incur costs for environmental and other requirements that could exceed potential savings through avoidance of routine maintenance costs, and (b) any proceeds may only be used for the construction of VA nursing homes.
In recent years, Congress passed two pieces of legislation intended, in part, to foster greater coordination among education, welfare, and employment and training programs. The Workforce Investment Act (WIA) was passed in 1998 to consolidate services for many employment and training programs, requiring states and localities to use a centralized service delivery structure—the one-stop center system—to provide most federally funded employment and training assistance. States and localities had been developing one-stop centers prior to WIA, helped in part by One-Stop grants from the Department of Labor (Labor), but they were not required to do so until the passage of WIA. The Temporary Assistance for Needy Families (TANF) block grant, created two years earlier by the 1996 Personal Responsibility and Work Opportunity Reconciliation Act (PRWORA), allowed states and localities greater flexibility than ever before in designing employment and training services for clients receiving cash assistance. While TANF is not one of 17 federal programs mandated to provide services through the one-stop system, states and localities have the option to include TANF as a partner. GAO’s prior work on pre-WIA programs found that states varied in the degree to which employment and training services for TANF clients were being coordinated through the one-stop system. For well over a decade, states and localities have engaged in efforts to integrate services for their employment and training programs. In fiscal year 1994, Labor helped them in their efforts when it began awarding One-Stop Planning and Implementation grants, requiring states to include most Labor-funded programs in the new one-stop centers in order to receive the grants. The key objectives of Labor’s one-stop initiative, aside from integration, were to create a system that was customer-driven and accountable for its outcomes and that made its core services available to all job seekers. By 1998, all 50 states had received at least some one-stop planning or implementation grant funds. When WIA was enacted, it expanded the use of the one-stop system, requiring states and localities to use this once optional service delivery structure to provide many other employment and training services. In implementing WIA, Labor continued to promote the key objectives of the earlier one-stop initiative while emphasizing state and local flexibility and a strong role for the private sector on new, local boards that oversee the program. WIA also extended the one-stop concept beyond Labor programs, requiring states and localities to form partnerships with other agencies offering employment and training services. About 17 categories of programs, funded through four federal agencies—the Departments of Labor, Education, Health and Human Services, and Housing and Urban Development—must provide services through the one-stop center system under WIA. WIA does not require that all program services be provided on site (or colocated)—they may be provided through electronic linkages with partner agencies or by referral—but WIA does require that the relationships and services be spelled out in a Memorandum of Understanding between the partners. While several programs are required by WIA to provide services through the one-stop centers, others have been left to the discretion of state and local officials, including the TANF block grant program. State and local flexibility is also a key feature of the TANF program, which was passed by Congress two years before WIA.
Under TANF, states have more flexibility than under its predecessor programs to determine the nature of financial assistance, the types of client services, the structure of the program, and how services are to be delivered. At the same time, TANF established new accountability measures for states—focused in part on meeting work requirements—and a 5-year lifetime limit on federal TANF assistance. These measures heighten the importance of helping TANF recipients find work quickly and retain employment. As states have used the new flexibility under TANF and have focused more on employment, the importance of coordinating services for TANF clients has received increased attention. To help clients get and retain jobs, states need to address problems that may interfere with employment, such as child care and transportation issues and mental and physical health problems. Frequently, solving these problems requires those who work directly with clients to draw on other federal and state programs, often administered by other agencies, to provide a wide array of services. While local welfare agencies have typically administered TANF, Food Stamps, and Medicaid, other programs that provide key services to TANF clients are administered by housing authorities, education agencies, and state employment services offices. TANF’s focus on employment means that welfare agencies may need to work more closely than before with state and local workforce development systems. In the past, under the Work Incentive program, welfare agencies and workforce development systems collaborated at some level, but our previous work on pre-WIA programs found wide variation in the degree to which the welfare and nonwelfare programs worked together to provide employment and training services. State and local efforts to coordinate their TANF and WIA programs increased in 2001, at least one year after all states implemented WIA. Nearly all states reported some coordination at the state or local level, achieved with methods ranging from informal linkages (such as information sharing or periodic program referrals) to formal linkages (such as memoranda of understanding), shared intake, or integrated case management. Coordination of TANF-related services with one-stop centers increased from 2000 to 2001, and the form of coordination—colocation of services, electronic linkages or client referral—was based, in part, on the type of services provided—TANF work, TANF cash assistance, or support services—as well as state and local preferences and conditions. Modest increases in states’ efforts to coordinate the management of TANF and WIA programs occurred between 2000 and 2001. Twenty-eight states reported that in 2001 they made extensive use of formal linkages, such as memoranda of understanding and state-level formal agreements, between the agencies administering TANF and WIA, compared with 27 states in 2000. Similarly, states increased their use of coordinated planning in 2001, with 19 states reporting that they used it to a great extent compared with 18 states in 2000 (see figure 1). When we looked at states individually, we saw that many were using additional coordination methods in 2001. Seventeen states indicated that the number of the state-level coordination methods they used to a great extent increased in 2001. 
In fact, in 2001, nine states used all five of the coordination methods that we analyzed—formal linkages, shared performance measurement and reporting, interagency and intra-agency workgroups, coordinated planning, and informal linkages and interagency communication (such as sharing program information)—up from 7 states in 2000. Increased coordination between TANF and WIA programs was also seen in the use of TANF funds to support one-stop center infrastructure or operations or both. The number of states using TANF funds to support one-stop centers increased to 36 in 2001 from 33 in 2000. In addition, the number of states ranking TANF as one of the three largest funding sources for their one-stop centers rose to 15 from 12. Some of the largest gains in program coordination between 2000 and 2001 were seen at the local level, with the most dramatic changes occurring in informal linkages, such as periodic program referrals or information services. Forty-four states reported that most of their one-stop centers had informal linkages with their TANF programs in 2001, compared with 35 states in 2000 (see figure 2). Similarly, 16 states reported that most of their one-stop centers had shared intake or enrollment systems in 2001—up from 13 in 2000; and 15 states reported in 2001 that they used an integrated case management system in most of their one-stop centers—an increase of 1 state from our 2000 results. Also, our analysis suggests that more coordination methods are in use at the local level. The number of states that reported that most of their one-stop centers used all seven methods of local-level coordination increased in 2001 to 10 states from 7 in 2000. Some of these coordination methods have the potential to reduce the administrative burden on both clients and staff by decreasing the number of applications that clients must complete and eliminating the need for staff to enter similar client information into several systems. For example, one locality in Connecticut cross-trained staff to provide both TANF and WIA services and developed an integrated case management system so that one case manager could track clients across both TANF and WIA programs, in an effort to reduce the amount of time that staff needed to spend on administrative tasks like data entry. Increases in coordination between the TANF program and one-stop centers were also seen in the use of the one-stop center system to provide services to TANF clients. While the same number of states—24—reported in both 2000 and 2001 that services for the TANF work program were colocated at the majority of their one-stops, the use of electronic linkages or referrals increased. Fifteen states reported in 2001 that services for the TANF work program were either electronically linked to the majority of their one-stop centers or provided by referral between the two programs. In 2000, 11 states reported these types of linkages. About half of the states coordinated their TANF cash assistance or Food Stamps or Medicaid programs with the one-stop centers, electronically or by referral in 2000 and 2001. State officials in both Connecticut and New Jersey reported that even though one-stop staff did not determine eligibility for Medicaid and Food Stamps at the one-stops, the staff were expected to refer clients to appropriate support services outside one-stop centers.
While not as prevalent as electronic linkages or referrals, colocation of cash assistance appeared to increase in 2001: 16 states reported that they provided cash assistance services at least part time at the majority of their one-stop centers, compared with 9 states in 2000. Colocation of Food Stamps and Medicaid remained the same: seven states reported in both years that they provided those services at least part time at the majority of one-stops. In general, the form of coordination between TANF and one-stops was different depending on the particular program services that were provided. For example, when the TANF work programs were being coordinated through the one-stop centers, services were more likely to be colocated. TANF cash assistance and the Food Stamps and Medicaid programs were more likely to be connected electronically or by referrals (see figure 3). Sometimes states instituted policies to further strengthen the relationships between the programs and ensure that clients were connected to one-stop services. In Michigan, for example, TANF clients were required to attend an orientation session at the one-stop before they could receive cash assistance. Similarly, in Connecticut, where there were low participation rates for TANF clients at one-stop centers, the legislature enacted a law requiring TANF clients to use one-stop centers as a condition of receiving cash assistance. In our site visits, we saw wide variation in the degree to which other support services, such as child care and transportation, were provided through the one-stop system. For child care assistance, the forms of coordination ranged from the colocation of child care programs at the one-stop to providing information on services available elsewhere. In New Jersey, for example, representatives from child care assistance programs were colocated at some of the one-stop centers, whereas in Arizona, coordination was limited to brochures supplied to one-stop centers. Many of the one-stops that we visited provided some kind of transportation assistance, although the nature of the services and whether or not the services were reserved for TANF clients varied from locality to locality. For example, in one location in New Jersey that we visited, the one-stop center reimbursed transportation expenses to any low-income client attending training, whether or not the client was covered under TANF. Another New Jersey one-stop provided van services to transport former TANF clients to and from job interviews and, once clients were employed, to and from their jobs, even during evening and night shifts. Similarly, a one-stop in Connecticut provided mileage reimbursement to current and former TANF clients for their expenses associated with going to and from their jobs. And in Louisiana, a one-stop we visited contracted with a nonprofit agency to provide van services to transport Welfare-to-Work grant recipients to and from work-related activities. Little is known about the relative success of TANF clients who use one-stop centers compared with those receiving services elsewhere, and state and local officials told us that decisions about how services were delivered were based on state and local preferences and conditions. Some state and local officials expressed a preference for colocating TANF programs at one-stop centers. For example, officials in a local area in Louisiana believed that colocation of TANF programs at the one-stop center would benefit TANF clients by exposing them to the one-stop center’s employer focus.
These officials also said that colocation would result in a more seamless service delivery approach, giving clients easier access to the services. Other state and local officials preferred not to colocate all TANF-related programs. While they supported the colocation of TANF work programs, they thought that cash assistance, Food Stamps, or Medicaid should be provided elsewhere. For example, Michigan officials told us that keeping eligibility functions for TANF cash assistance, Food Stamps, and Medicaid separate was beneficial, because welfare staff had more expertise in the provision of social services while labor staff were better equipped to provide work-related services. Still other state and local officials were concerned about the colocation of any TANF-related programs, because TANF clients required special attention and were best served by staff trained to address their unique barriers. For example, in Arizona, TANF work programs were provided to TANF clients through a system that was not connected to one-stop centers. Rather than colocating services or systematically referring welfare clients to one-stop centers, officials there said that TANF clients should be referred to one-stop centers on a case-by-case basis. State officials in Washington reported that TANF clients needed a higher level of supervision and more structured assistance than they believed one-stop centers could provide. Officials saw the one-stop centers as better structured to serve those clients whose participation was voluntary, whereas TANF clients are generally required to engage in work. Local conditions, such as geographically dispersed one-stop centers and low population density of TANF clients, also influenced state and local decisions about how to coordinate TANF-related programs with one-stop centers. For example, officials in Alabama reported that although welfare agencies were located in every county, one-stop centers were less prevalent in their state. They felt it was impractical to have TANF-related services colocated at one-stop centers, because one-stop centers would be inaccessible to many TANF clients. In addition, officials in Illinois said that they were hesitant to coordinate the provision of work-related services for TANF clients at one-stop centers in areas where the TANF population had recently declined. Because of declining TANF caseloads in Illinois, state officials stressed the importance of allowing local areas the flexibility to determine how to coordinate TANF-related services with one-stop centers. Conversely, other states were working to make one-stop centers more accessible to TANF clients. For example, both New Jersey and Louisiana established plans to create satellite one-stop centers in public housing areas. Because of the variation in local conditions, several state officials stressed the importance of local flexibility in determining the nature of coordination of TANF-related programs with one-stop centers. Despite increases in coordination between the TANF program and one-stops from 2000 to 2001, states and localities have continued to face challenges in coordinating their TANF work programs with one-stop centers. For some of the challenges, the existing flexibility under both TANF and WIA allowed states and localities to find solutions, and we found that some areas had developed ways to resolve them. However, other challenges cannot be easily resolved at the local level. Most challenges are similar to those we reported in 2000, when WIA was first implemented. 
In general, the challenges result from state and local efforts to (1) develop the one-stop infrastructure that allows staff to readily provide needed services to TANF clients and (2) develop more compatible program definitions and requirements. Infrastructure limitations—in terms of both facilities and computer systems—continued to challenge states and localities in their efforts to coordinate TANF-related programs with one-stop centers. Colocation of TANF services within the one-stop was not a viable option in many of the locations that we visited. Officials in several states reported that available space at one-stop centers was limited and that the centers could not house additional programs or service providers. In addition, state officials explained that long-term leases or the use of state-owned buildings often prevented TANF work programs from relocating to one-stop centers. States developed ways to overcome these challenges to colocation in order to meet the needs of TANF clients. For example, Louisiana's Department of Labor placed a Welfare-to-Work staff member in all local welfare offices. These staff members provided TANF clients with information about the services available at one-stop centers. In addition, one state assigned TANF staff to one-stop centers to serve TANF clients. The states that we visited reported that the inability to link the information systems of TANF work programs and one-stop centers complicated efforts to coordinate programs. A recent conference that we cosponsored also highlighted this issue, specifically identifying the age of information systems as inhibiting coordination efforts. The need to modernize the systems stemmed from three sources: the shift in objectives under TANF—focusing more on preparing TANF clients for work than previous welfare programs had—which created new demands on information systems; the fact that systems used by agencies providing services to TANF clients did not share data on these clients, thus hindering the case management of clients; and antiquated information systems that made it difficult for agencies to take advantage of new technologies, such as Web-based technologies. Some of these concerns were also raised during our site visits and phone interviews. Some local officials said that they could not merge or share data and were not equipped to collect information on clients in different programs. TANF clients are often tracked separately from clients of other programs, and even Labor's system, the One-Stop Operating System (OSOS), does not allow one-stop centers to include TANF programs. In addition, other officials expressed concerns that sharing data across programs would violate confidentiality restrictions. The issues of incompatible computer systems are not easily resolved. Officials from two states we visited said that their states' WIA and TANF agencies were exploring the development of a shared system but that cost estimates were too high for it to be implemented at this time. As states and localities attempted to coordinate services for TANF clients through the one-stop, they encountered challenges in harmonizing program definitions and meeting reporting requirements. State officials noted that although the focuses of TANF work and WIA programs were related, differences in program definitions—such as what constitutes work or what income level constitutes self-sufficiency—made coordination difficult. 
While many program definitions are established by legislation and cannot be changed at the state or local level, a few can be locally determined, and two states found ways to harmonize their locally determined definitions. For example, Connecticut developed a self-sufficiency standard that could be uniformly applied across TANF and WIA, so that both programs would place clients in jobs with similar wage levels. One local one-stop center we visited in Arizona also worked to accommodate differences in program definitions. At this center, TANF and WIA officials worked together to develop training for both programs that enabled TANF clients to meet the requirements of a TANF work activity. As is the case with other programs in the one-stop centers, states and localities continue to struggle with the different reporting requirements attached to the various funding streams. Each program has restrictions on how its money can be used and what types of indicators it can use to measure success. Because the federal measures evaluate very different things, tracking performance for the TANF and WIA programs together was difficult. Despite the flexibility in TANF, state officials felt constrained by the need to meet federally required work participation rates, and they told us that they used these federal requirements to gauge how well their TANF work programs were performing. For example, one state official was concerned that the state TANF agency was focused more on meeting work participation rates than on designing programs that might help its TANF clients become self-sufficient. WIA, on the other hand, has a different set of performance measures geared toward client outcomes, including the degree to which clients' earnings change over time and whether the clients stay employed. Many states and localities are organizing their WIA programs to maximize their ability to achieve these and other key client outcomes. These differences in program indicators often lead to very different program services for clients and have made it difficult to coordinate TANF work programs with the one-stop centers. Resolving these differences in reporting requirements may require either state or federal action.
The Workforce Investment Act (WIA) brought most federally funded employment and training services into a single, one-stop center system. Coordination between Temporary Assistance for Needy Families (TANF) programs and one-stop centers has increased since the act was implemented in 2000. Nearly all states reported some coordination at either the state or the local level. Most often, coordination took one of two forms: colocation, in which a client accesses TANF programs at the local one-stop, or referrals and electronic links to off-site programs. Despite progress, states and localities continue to report problems because of infrastructure limitations and varying program definitions and reporting requirements. Some of these challenges could be overcome through state and local innovation, but others will be resolved only through federal intervention. Early evidence suggests that states and localities are increasing their efforts to bring services together to fit local needs. As states and localities have begun to recognize the shared goals of the workforce and welfare systems, they have developed ways to coordinate services.
Limitations in the core contract personnel inventory hinder the ability to determine the extent to which the eight civilian IC elements used these personnel in 2010 and 2011 and to identify how this usage has changed over time. IC CHCO uses the inventory information in its statutorily mandated annual personnel assessment to compare the current and projected number and costs of core contract personnel to the number and costs during the prior 5 years. IC CHCO reported that the number of core contract personnel full-time equivalents (FTEs) and their associated costs declined by nearly one-third from fiscal year 2009 to fiscal year 2011. However, we found a number of limitations with the inventory, including changes to the definition of core contract personnel, the elements' use of inconsistent methodologies and a lack of documentation for calculating FTEs, and errors in reporting contract costs. On an individual basis, some of the limitations we identified may not raise significant concerns. When taken together, however, they undermine the utility of the information for determining and reporting on the extent to which the civilian IC elements use core contract personnel. Additionally, IC CHCO did not clearly explain the effect of the limitations when reporting the information to Congress. We identified several issues that limit the comparability, accuracy, and consistency of the information reported by the civilian IC elements as a whole, including the following: Changes to the definition of core contract personnel. To address concerns that IC elements were interpreting the definition of core contract personnel differently and to improve the consistency of the information in the inventory, IC CHCO worked with the elements to develop a standard definition that was formalized with the issuance of Intelligence Community Directive (ICD) 612 in October 2009. Further, IC CHCO formed the IC Core Contract Personnel Inventory Control Board, which has representatives from all of the IC elements, to provide a forum to resolve differences in the interpretation of IC CHCO's guidance for the inventory. As a result of the board's efforts, IC CHCO provided supplemental guidance in fiscal year 2010 to either include or exclude certain contract personnel, such as those performing administrative support, training support, and information technology services. While these changes were made to—and could improve—the inventory data, it is unclear to what extent the definitional changes contributed to the reported decrease in the number of core contract personnel and their associated costs from year to year. For example, for fiscal year 2010, officials from one civilian IC element told us they stopped reporting information technology help desk contractors, which had been previously reported, to be consistent with IC CHCO's revised definition. One of these officials stated that, as a result, the element's reported reduction in core contract personnel between fiscal years 2009 and 2010 did not reflect an actual change in its use of core contract personnel, but rather a change in how core contract personnel were defined for the purposes of reporting to IC CHCO. However, IC CHCO included this civilian IC element's data when calculating the IC's overall reduction in the number of core contract personnel between fiscal years 2009 and 2011 in its briefing to Congress and in the personnel level assessment. 
IC CHCO explained in both documents that this civilian IC element's rebaselining had an effect on the element's reported number of contractor personnel for fiscal year 2010 but did not explain how this would limit the comparability of the number and costs of core contract personnel for both this civilian IC element and the IC as a whole. Inconsistent methodologies for determining FTEs. The eight civilian IC elements used significantly different methodologies when determining the number of FTEs. For example, some civilian IC elements estimated contract personnel FTEs using target labor hours while other civilian IC elements calculated the number of FTEs using the labor hours invoiced by the contractor. As a result, the reported numbers were not comparable across these elements. The IC CHCO core contract personnel inventory guidance for both fiscal years 2010 and 2011 did not specify appropriate methodologies for calculating FTEs, require IC elements to describe their methodologies, or require IC elements to disclose any associated limitations of their methodologies. Depending on the methodology used, an element could calculate a different number of FTEs for the same contract. For example, for one contract we reviewed at a civilian IC element that reported FTEs based on actual labor hours invoiced by the contractor, the element reported 16 FTEs. For the same contract, however, a civilian IC element that used estimated labor hours at the time of award would have calculated 27 FTEs. IC CHCO officials stated they had discussed standardizing the methodology for calculating the number of FTEs with the IC elements but identified challenges, such as identifying a standard labor-hour conversion factor for one FTE. IC CHCO guidance for fiscal year 2012 instructed elements to provide the total number of direct labor hours worked by the contract personnel to calculate the number of FTEs for each contract, as opposed to allowing for estimates, which could improve the consistency of the FTE information reported across the IC. Lack of documentation for calculating FTEs. Most of the civilian IC elements did not maintain readily available documentation of the information used to calculate the number of FTEs reported for a significant number of the records we reviewed. As a result, these elements could not easily replicate the process for calculating the reported FTEs or validate the reliability of the information for these records. Federal internal control standards call for appropriate documentation to help ensure the reliability of the information reported. For 37 percent of the 287 records we reviewed, however, we could not determine the reliability of the reported information. Inaccurately determined contract costs. We could not reliably determine the costs associated with core contract personnel, in part because our analysis identified numerous discrepancies between the amount of obligations reported by the civilian IC elements in the inventory and these elements' supporting documentation for the records we reviewed. For example, we found that the civilian IC elements either under- or over-reported the amount of contract obligations by more than 10 percent for approximately one-fifth of the 287 records we reviewed. Further, the IC elements could not provide complete documentation to validate the amount of reported obligations for another 17 percent of the records we reviewed. 
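To make the effect of these two data problems concrete, the short Python sketch below works through hypothetical numbers: it converts labor hours to FTEs under the two conventions described above and flags records whose reported obligations differ from supporting documentation by more than 10 percent. This is an illustration only, not IC CHCO's or any element's actual method; the 1,880-hour conversion factor and all record values are assumptions, since, as noted above, the IC has no standard labor-hour factor.

# Hypothetical illustration of how two FTE conventions diverge and how
# obligation discrepancies can be flagged; all values are assumptions.
HOURS_PER_FTE = 1880  # assumed conversion factor; no IC-wide standard exists

def fte_from_hours(labor_hours):
    """Convert labor hours to full-time equivalents."""
    return labor_hours / HOURS_PER_FTE

# The same contract yields different FTE counts depending on whether
# estimated (at-award) or invoiced (actual) labor hours are used.
estimated_hours = 50760  # target hours written into the contract (assumed)
invoiced_hours = 30080   # hours actually billed by the contractor (assumed)
print(round(fte_from_hours(estimated_hours)))  # 27 FTEs under one convention
print(round(fte_from_hours(invoiced_hours)))   # 16 FTEs under the other

def discrepancy(reported, documented):
    """Fraction by which reported obligations deviate from documentation."""
    return abs(reported - documented) / documented

# Flag records whose reported obligations are more than 10 percent off.
records = [(1_200_000, 1_000_000), (980_000, 1_000_000)]  # assumed values
flagged = [r for r in records if discrepancy(*r) > 0.10]
print(flagged)  # only the first record exceeds the 10 percent threshold

Under these assumptions, the same contract yields 27 FTEs when counted from estimated hours but only 16 when counted from invoiced hours, mirroring the divergence described above.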
Civilian IC elements cited a number of factors that may account for the discrepancies, including the need to manually enter obligations for certain contracts or to manually delete duplicate contracts. Officials from one civilian IC element noted that a new contract management system was used for reporting obligations in the fiscal year 2011 inventory, which offered greater detail and improved functionality for identifying obligations on their contracts; however, we still identified discrepancies in 18 percent of this element's reported obligations in fiscal year 2011 for the records in our sample. In our January 2014 report, we recommended that IC CHCO clearly specify limitations, significant methodological changes, and their associated effects when reporting on the IC's use of core contract personnel. We also recommended that IC CHCO develop a plan to enhance internal controls for compiling the core contract personnel inventory. IC CHCO agreed with these recommendations and described steps it was taking to address them. Specifically, IC CHCO stated it will highlight all adjustments to the data over time and the implications of those adjustments in future briefings to Congress and OMB. In addition, IC CHCO stated it has added requirements for the IC elements to include the methodologies used to identify and determine the number of core contract personnel and their steps for ensuring the accuracy and completeness of the data. The civilian IC elements have used core contract personnel to perform a range of functions, including human capital, information technology, program management, administration, collection and operations, and security services. However, the aforementioned limitations we identified in the obligation and FTE data precluded us from using the information on contractor functions to determine the number of personnel and their costs associated with each function category. Further, the civilian IC elements could not provide documentation for 40 percent of the contracts we reviewed to support the reasons they cited for using core contract personnel. As part of the core contract personnel inventory, IC CHCO collects information from the elements on contractor-performed functions using the primary contractor occupation and competency expertise data field. An IC CHCO official explained that this data field should reflect the tasks performed by the contract personnel. IC CHCO's guidance for this data field instructs the IC elements to select one option from a list of over 20 broad categories of functions for each contract entry in the inventory. Based on our review of relevant contract documents, such as statements of work, we were able to verify the categories of functions performed for almost all of the contracts we reviewed, but we could not determine the extent to which civilian IC elements contracted for these functions. For example, we were able to verify for one civilian IC element's contract that contract personnel performed functions within the systems engineering category, but we could not determine the number of personnel dedicated to that function because of unreliable obligation and FTE data. Further, the IC elements often lacked documentation to support why they used core contract personnel. 
In preparing their inventory submissions, IC elements can select one of eight options for why they needed to use contract personnel, including the need for surge support in a particular IC mission area, insufficient staffing resources, or unique technical, professional, managerial, or intellectual expertise that is not otherwise available from U.S. government civilian or military personnel. However, for 81 of the 102 records in our sample coded as unique expertise, we did not find evidence in the statements of work or other contract documents that the functions performed by the contractors required expertise not otherwise available from U.S. government civilian or military personnel. For example, contracts from one civilian IC element coded as unique expertise included services for conducting workshops and analysis, producing financial statements, and providing program management. Overall, the civilian IC elements could not provide documentation for 40 percent of the 287 records we reviewed. As previously noted, in our January 2014 report, we recommended that IC CHCO develop a plan to enhance internal controls for compiling the core contract personnel inventory. CIA, ODNI, and the executive departments that are responsible for developing policies to address contractor-related risks for the six civilian IC elements within those departments have generally made limited progress in developing such policies. Further, the eight civilian IC elements have generally not developed strategic workforce plans that address contractor use and may be missing opportunities to leverage the inventory as a tool for conducting strategic workforce planning and for prioritizing contracts that may require increased management attention and oversight. By way of background, federal acquisition regulations provide that, as a matter of policy, certain functions that government agencies perform, such as determining agency policy, are inherently governmental and must be performed by federal employees. In some cases, contractors perform functions closely associated with the performance of inherently governmental functions. For example, contractors performing certain intelligence analysis activities may closely support inherently governmental functions. For more than 20 years, OMB procurement policy has indicated that agencies should provide a greater degree of scrutiny when contracting for services that closely support inherently governmental functions. The policy directs agencies to ensure that they maintain sufficient government expertise to manage the contracted work. The Federal Acquisition Regulation also addresses the importance of management oversight associated with contractors providing services that have the potential to influence the authority, accountability, and responsibilities of government employees. Our prior work has examined reliance on contractors and the mitigation of related risks at the Department of Defense, the Department of Homeland Security, and several other civilian agencies and found that these agencies generally did not fully consider and mitigate the risks of acquiring services that may inform government decisions. Within the IC, core contract personnel perform the types of functions that may affect an IC element's decision-making authority or control of its mission and operations. 
While core contract personnel may perform functions that closely support inherently governmental work, these personnel are generally prohibited from performing inherently governmental functions. Figure 1 illustrates how the risk of contractors influencing government decision making increases as core contract personnel perform functions that closely support inherently governmental functions. More recently, OFPP's September 2011 Policy Letter 11-01 builds on past federal policies by including a detailed checklist of responsibilities that must be carried out when agencies rely on contractors to perform services that closely support inherently governmental functions. The policy letter requires executive branch departments and agencies to develop and maintain internal procedures to address the requirements of the guidance. OFPP, however, did not establish a deadline for when agencies need to complete these procedures. In 2011, when we reviewed civilian agencies' efforts in managing service contracts, we concluded that a deadline might help better focus agency efforts to address risks and therefore recommended that OFPP establish a near-term deadline for agencies to develop internal procedures, including for services that closely support inherently governmental functions. OFPP generally concurred with our recommendation and commented that it would likely establish time frames for agencies to develop the required internal procedures, but it has not yet done so. In our January 2014 report, we found that CIA, ODNI, and the departments of the other civilian IC elements had not fully developed policies that address risks associated with contractors closely supporting inherently governmental functions. DHS and State had issued policies and guidance that addressed generally all of OFPP Policy Letter 11-01's requirements related to contracting for services that closely support inherently governmental functions. However, the Departments of Justice, Energy, and Treasury; CIA; and ODNI were in various stages of developing the required internal policies to address the policy letter. Civilian IC element and department officials cited various reasons for not yet developing policies to address all of the OFPP policy letter's requirements. For example, Treasury officials stated that the OFPP policy letter called for dramatic changes in agency procedures and that Treasury thus elected to conduct a number of pilots before making policy changes. We also found that decisions to use contractors were not guided by strategies on the appropriate mix of government and contract personnel. OMB's July 2009 memorandum on managing the multisector workforce and our prior work on best practices in strategic human capital management have indicated that agencies' strategic workforce plans should address the extent to which it is appropriate to use contractors. Specifically, agencies should identify the appropriate mix of government and contract personnel on a function-by-function basis, especially for critical functions, which are functions necessary for the agency to effectively perform and maintain control of its mission and operations. The OMB guidance requires an agency to have sufficient internal capability to control its mission and operations when contracting for these critical functions. While IC CHCO requires IC elements to conduct strategic workforce planning, it does not require the elements to determine the appropriate mix of personnel, either generally or on a function-by-function basis. 
ICD 612 directs IC elements to determine, review, and evaluate the number and uses of core contract personnel when conducting strategic workforce planning but does not reference the requirements related to determining the appropriate workforce mix specified in OMB's July 2009 memorandum or require elements to document the extent to which contractors should be used. As we reported in January 2014, the civilian IC elements' strategic workforce plans generally did not address the extent to which it is appropriate to use contractors, either in general or more specifically to perform critical functions. For example, ODNI's 2012-2017 strategic human capital plan outlines the current mix of government and contract personnel by five broad function types: core mission, enablers, leadership, oversight, and other. The plan, however, does not elaborate on what the appropriate mix of government and contract personnel should be on a function-by-function basis. In August 2013, ODNI officials informed us that they were continuing to develop workforce planning documentation. Lastly, the civilian IC elements' ability to use the inventory for strategic planning is hindered by limited information on contractor functions. OFPP's November 2010 memorandum on service contract inventories indicates that a service contract inventory is a tool that can assist an agency in conducting strategic workforce planning. Specifically, an agency can gain insight into the extent to which contractors are being used to perform specific services by analyzing how contracted resources, such as contract obligations and FTEs, are distributed by function across the agency. The memorandum further indicates that this insight is especially important for contracts whose performance may involve critical functions or functions closely associated with inherently governmental functions. When we met with OFPP officials during the course of our work, they stated that the IC's core contract personnel inventory serves this purpose for the IC and, to some extent, follows the intent of the service contract inventories guidance to help mitigate risks. OFPP officials stated that IC elements are not required to submit the separate service contract inventories that are required of civilian agencies and DOD, in part because of the classified nature of some of the contracts. The core contract personnel inventory, however, does not provide the civilian IC elements with detailed insight into the functions their contractors are performing or the extent to which contractors are used to perform functions that are either critical to supporting their missions or that closely support inherently governmental work. For example, based on the contract documents, we identified at least 128 instances among the 287 records we reviewed in which the functions reported in the inventory data did not reflect the full range of services listed in the contracts. In our January 2014 report, we concluded that without complete and accurate information in the core contract personnel inventory on the extent to which contractors are performing specific functions, the civilian IC elements may be missing an opportunity to leverage the inventory as a tool for conducting strategic workforce planning and for prioritizing contracts that may require increased management attention and oversight. 
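As a minimal sketch of the function-level analysis that OFPP's memorandum describes, the hypothetical Python example below groups inventory-style records by function category and totals their FTEs and obligations. The record layout, field values, and category names are assumptions for illustration and do not reflect the inventory's actual fields or data.

from collections import defaultdict

# Hypothetical inventory records: (function category, FTEs, obligations).
records = [
    ("systems engineering", 16.0, 3_400_000),
    ("program management", 6.5, 1_100_000),
    ("systems engineering", 4.0, 800_000),
    ("collection and operations", 9.0, 2_250_000),
]

totals = defaultdict(lambda: [0.0, 0])
for function, ftes, obligations in records:
    totals[function][0] += ftes
    totals[function][1] += obligations

# Rank functions by contracted FTEs to show where contractor effort is
# concentrated -- the distribution a workforce-mix review would examine.
for function, (ftes, obligations) in sorted(
        totals.items(), key=lambda item: item[1][0], reverse=True):
    print(f"{function}: {ftes:.1f} FTEs, ${obligations:,} obligated")

Ranking the totals this way shows where contracted effort is concentrated, which is the kind of insight a workforce-mix review or a screen for critical and closely supporting functions would start from.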
In our January 2014 report, we recommended that the Departments of Justice, Energy, and Treasury; CIA; and ODNI set time frames for developing guidance that would fully address OFPP Policy Letter 11-01's requirements related to closely supporting inherently governmental functions. The agencies are in various stages of responding to our recommendation. For example, Treasury indicated plans to issue guidance by the end of fiscal year 2014. DOJ agreed with our recommendation, and we will continue to follow up with the department on its planned actions. CIA, DOE, and ODNI have not commented on our recommendation, and we will continue to follow up with them to identify what actions, if any, they are taking to address it. To improve the ability of the civilian IC elements to strategically plan for their contractors and mitigate associated risks, we also recommended that IC CHCO revise ICD 612 to require IC elements to identify their assessment of the appropriate workforce mix on a function-by-function basis, assess how the core contract personnel inventory could be modified to provide better insights into the functions performed by contractors, and require the IC elements to identify contracts within the inventory that include services that are critical or that closely support inherently governmental functions. IC CHCO generally agreed with these recommendations and indicated that it would explore ways to address them. In conclusion, IC CHCO and the civilian IC elements recognize that they rely on contractors to perform functions essential to meeting their missions. Effectively leveraging the skills and capabilities that contractors provide while managing the government's risk, however, requires agencies to have the policies, tools, and data in place to make informed decisions. OMB and OFPP guidance issued over the past several years provides a framework to help ensure that agencies appropriately identify, manage, and oversee contractors performing services that closely support inherently governmental functions, but we found that CIA, ODNI, and several of the departments in our review still need to develop guidance to fully implement it. Similarly, the core contract personnel inventory can be one of the tools that help inform strategic workforce decisions, but at this point the inventory has a number of data limitations that undermine its utility. IC CHCO has recognized these limitations and, in conjunction with the IC elements, has already taken some actions to improve the inventory's reliability and has committed to doing more. Collectively, incorporating needed changes into agency guidance and improving the inventory's data and utility, as we recommended, should better position IC CHCO and the civilian IC elements to make more informed decisions. Chairman Carper, Ranking Member Coburn, and Members of the Committee, this concludes my prepared remarks. I would be happy to answer any questions that you may have. For questions about this statement, please contact Timothy DiNapoli at (202) 512-4841, or at dinapolit@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this testimony include Molly W. Traci, Assistant Director; Claire Li; and Kenneth E. Patton. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. 
However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The IC uses core contract personnel to augment its workforce. These contractors typically work alongside government personnel and perform staff-like work. Some core contract personnel require enhanced oversight because they perform services that could significantly influence the government's decision making. In September 2013, GAO issued a classified report that addressed (1) the extent to which the eight civilian IC elements use core contract personnel, (2) the functions performed by these personnel and the reasons for their use, and (3) whether the elements developed policies and strategically planned for their use. GAO reviewed and assessed the reliability of the elements' core contract personnel inventory data for fiscal years 2010 and 2011, including reviewing a nongeneralizable sample of 287 contract records. GAO also reviewed agency acquisition policies and workforce plans and interviewed agency officials. In January 2014, GAO issued an unclassified version of the September 2013 report, GAO-14-204. This statement is based on the information in the unclassified GAO report. Limitations in the intelligence community's (IC) inventory of contract personnel hinder the ability to determine the extent to which the eight civilian IC elements—the Central Intelligence Agency (CIA), Office of the Director of National Intelligence (ODNI), and six components within the Departments of Energy, Homeland Security, Justice, State, and the Treasury—use these personnel. The IC Chief Human Capital Officer (CHCO) conducts an annual inventory of core contract personnel that includes information on the number and costs of these personnel. However, GAO identified a number of limitations in the inventory that collectively limit the comparability, accuracy, and consistency of the information reported by the civilian IC elements as a whole. For example, changes to the definition of core contract personnel limit the comparability of the information over time. In addition, the civilian IC elements used various methods to calculate the number of contract personnel and did not maintain documentation to validate the number of personnel reported for 37 percent of the records GAO reviewed. GAO also found that the civilian IC elements either under- or over-reported the amount of contract obligations by more than 10 percent for approximately one-fifth of the records GAO reviewed. Further, IC CHCO did not fully disclose the effects of such limitations when reporting contract personnel and cost information to Congress, which limits the transparency and usefulness of that information. The civilian IC elements used core contract personnel to perform a range of functions, such as information technology and program management, and reported in the core contract personnel inventory on the reasons for using these personnel. However, limitations in the information on the number and cost of core contract personnel preclude the information on contractor functions from being used to determine the number of personnel and their costs associated with each function. Further, civilian IC elements reported in the inventory a number of reasons for using core contract personnel, such as the need for unique expertise, but GAO found that 40 percent of the contract records reviewed did not contain evidence to support the reasons reported. 
Collectively, CIA, ODNI, and the departments responsible for developing policies to address risks related to contractors for the other six civilian IC elements have made limited progress in developing those policies, and the civilian IC elements have generally not developed strategic workforce plans that address contractor use. Only the Departments of Homeland Security and State have issued policies that generally address all of the Office of Federal Procurement Policy's requirements related to contracting for services that could affect the government's decision-making authority. In addition, IC CHCO requires the elements to conduct strategic workforce planning but does not require the elements to determine the appropriate mix of government and contract personnel. Further, the inventory does not provide insight into the functions performed by contractors, in particular those that could inappropriately influence the government's control over its decisions. Without complete and accurate information in the inventory on the extent to which contractors are performing specific functions, the elements may be missing an opportunity to leverage the inventory as a tool for conducting strategic workforce planning and for prioritizing contracts that may require increased management attention and oversight. In the January 2014 report, GAO recommended that IC CHCO take several actions to improve the inventory data's reliability, revise strategic workforce planning guidance, and develop ways to identify contracts for services that could affect the government's decision-making authority. IC CHCO generally agreed with GAO's recommendations.
Combined Medicare and Medicaid payments to nursing homes for care provided to vulnerable elderly and disabled beneficiaries totaled about $64 billion in 2002, with total federal payments of approximately $45.5 billion. Oversight of nursing home quality is a shared federal-state responsibility. On the basis of statutory requirements, CMS defines standards that nursing homes must meet to participate in the Medicare and Medicaid programs, and contracts with states to assess, through annual surveys and complaint investigations, whether homes meet these standards. CMS is also responsible for monitoring the adequacy of state survey activities. Arkansas’s unique 1999 law requires investigations by county officials, such as coroners, of nursing home residents’ deaths and referral of any cases of suspected neglect to the state survey agency and the MFCU. Every nursing home receiving Medicare or Medicaid payments must undergo an unannounced standard survey not less than once every 15 months, and the statewide average interval for these surveys must not exceed 12 months. A standard survey entails a team of state surveyors, including registered nurses, spending several days in the nursing home to assess compliance with federal long-term care facility requirements, particularly whether care and services provided meet the assessed needs of the residents and whether the home is providing adequate quality of care, such as preventing avoidable pressure sores, weight loss, or accidents. State surveyors assess the quality of care provided to a sample of residents during the standard survey, which is the basis for evaluating nursing homes’ compliance with federal requirements. CMS establishes specific investigative protocols for state surveyors to use in conducting these comprehensive surveys. These procedural instructions are intended to make the on-site surveys thorough and consistent across states. When a deficiency is identified, the nursing home is required to prepare a plan of correction that must be approved by the state survey agency. Our earlier work indicated that facilities could mask certain deficiencies, such as routinely having too few staff to care for residents, if they could predict the survey timing; CMS therefore directed states, effective in 1999, to (1) avoid scheduling a home’s survey for the same month of the year as the home’s previous standard survey and (2) begin at least 10 percent of standard surveys outside the normal workday (either on weekends, early in the morning, or late in the evening). Complaint investigations provide an opportunity for state surveyors to intervene promptly if quality-of-care problems arise between standard surveys. A nursing home resident, family member, friend, nursing home employee, or others may file complaints. CMS requires the investigation of complaints that represent immediate jeopardy to resident health and safety within 2 working days and considers such complaints to be those where one or more of the conditions alleged in the complaint, if true, may have caused or is likely to cause serious injury, harm, impairment, or death to a resident. Beginning in 1999, CMS required investigation of complaints that allege harm to a resident (but which do not rise to the level of immediate jeopardy) within 10 working days, but did not provide detailed guidance to the states about what constitutes harm until November 2003. 
In November 2003 guidance, CMS generally defined two categories of complaints representing harm: (1) those that, if true, would significantly impair the resident's mental, physical, and/or psychosocial status, which must be investigated within 10 working days, and (2) those that would not significantly impair the resident's mental, physical, and/or psychosocial status, which must be investigated within 45 calendar days. Other complaints that do not rise to the level of either immediate jeopardy or harm need not be investigated until the home's next survey or, in some cases, at all if the state survey agency can determine with certainty that no investigation, analysis, or action is necessary. The requirements identified in the November 2003 guidance became effective on January 1, 2004. Generally, nurse surveyors investigate complaints onsite at the nursing home by reviewing medical records and interviewing staff and residents. The investigations typically include a sample of residents in addition to the resident who is the subject of the complaint to help determine if the problems are systemic. Depending on the volume of complaints against a particular home, several complaints for different residents may be investigated concurrently. Each complaint may contain one or more allegations that a facility is violating federal quality-of-care standards. For example, a single complaint could allege problems with resident abuse, treatment of pressure sores, and proper feeding and hydration. In the course of complaint investigations, the state survey agency can either substantiate or not substantiate the specific allegations or discover other, unreported violations of federal standards (see table 1). A substantiated complaint, however, does not necessarily mean that the state survey agency found neglect of the resident who was the subject of the complaint but rather may indicate other, unrelated care problems. If the state survey agency finds a current violation of a federal standard during a complaint investigation—even if the violation does not relate to the specific allegations being investigated or the residents who are the subject of the complaint—it is required to cite a deficiency against the home. If a complaint investigation reveals no current violation of federal standards but determines that an egregious violation of federal standards occurred in the past that was not identified during earlier surveys, a deficiency known as past noncompliance should be cited and a civil monetary penalty imposed. CMS does not define egregious but indicates that it includes noncompliance related to a resident's death. Quality-of-care deficiencies identified during either standard surveys or complaint investigations are classified in 1 of 12 categories according to their scope (i.e., the number of residents potentially or actually affected) and their severity. An A-level deficiency is the least serious and is isolated in scope, while an L-level deficiency is the most serious and is considered to be widespread in the nursing home (see table 2). States are required to enter information about surveys and complaint investigations, including the scope and severity of deficiencies identified, in CMS's OSCAR database. Since 1998, such information has been available to the public through CMS's Nursing Home Compare Web site. 
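Because table 2 is not reproduced here, the 12-category grid can be sketched as a simple lookup. The Python reconstruction below is an assumption consistent with the description above (A at the isolated, least-serious corner and L at the widespread, most-serious corner) rather than a quotation of CMS guidance; the row labels borrow severity terms used elsewhere in this report.

# Reconstruction of the 12-category scope-and-severity grid described
# above; the exact cell assignments are an assumption, not CMS's table.
SCOPES = ("isolated", "pattern", "widespread")
SEVERITIES = (  # least to most serious
    "potential for minimal harm",
    "potential for more than minimal harm",
    "actual harm",
    "immediate jeopardy",
)

GRID = {}
letters = iter("ABCDEFGHIJKL")
for severity in SEVERITIES:
    for scope in SCOPES:
        GRID[(severity, scope)] = next(letters)

print(GRID[("potential for minimal harm", "isolated")])  # A, least serious
print(GRID[("actual harm", "isolated")])                 # G
print(GRID[("immediate jeopardy", "widespread")])        # L, most serious

Under this reconstruction, the actual harm or higher threshold discussed later in this report corresponds to levels G through L.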
CMS is responsible for overseeing each state survey agency's performance in ensuring nursing homes' compliance with federal standards for quality of care. Its primary oversight tools are statutorily required federal monitoring surveys conducted annually in at least 5 percent of Medicare and Medicaid nursing homes surveyed by each state, on-site annual state performance reviews instituted during fiscal year 2001, and analysis of periodic oversight reports that have been produced since 2000. Federal monitoring surveys can be either comparative or observational. A comparative survey involves a federal survey team conducting a complete, independent survey of a home within 2 months of the completion of a state's survey in order to compare and contrast the findings. In an observational survey, one or more federal surveyors accompany a state survey team to a nursing home to observe the team's performance. Roughly 81 percent of federal surveys conducted in fiscal year 2003 were observational. State performance reviews, implemented in October 2000, measure state performance against seven standards, including statutory requirements on survey frequency, requirements for documenting deficiencies, timeliness of complaint investigations, and timely and accurate entry of deficiencies into OSCAR. These reviews replaced state self-reporting of compliance with federal requirements. In October 2000, CMS also began to produce 19 periodic reports to monitor both state and regional office performance. The reports are based on OSCAR and other CMS databases. Examples of reports that track state activities include pending nursing home terminations (weekly); data entry timeliness (quarterly); tallies of state surveys that find homes deficiency-free (semiannually); and analyses, by state, of the most frequently cited deficiencies (annually). These reports, in a standard format, enable comparisons within and across states and regions and are intended to help identify problems and the need for intervention. Certain reports—such as those on the timeliness of state survey activities—are used to monitor compliance with state performance standards. In July 1999, Arkansas enacted a law requiring nursing homes to immediately report the deaths of residents to the local coroner, regardless of the cause of death. The law included a similar reporting requirement for a hospital when a resident died within 5 days of transferring from a nursing home. Coroners who find reasonable cause to suspect that a death is due to maltreatment are directed to report their findings to the state Department of Human Services and to law enforcement and the appropriate prosecuting attorney. The statute leaves the scope of the investigation up to each coroner. Like most states, Arkansas already required unnatural deaths to be reported to the coroner for investigation before enactment of the 1999 law. According to a coroner who was instrumental in demonstrating the need for the legislation, nursing home administrators chose to release decedents to funeral homes despite the existing requirement for a coroner investigation of deaths that occurred under suspicious circumstances. From 1994 to 1998, this coroner's office conducted six exhumations of nursing home residents, and after full postmortem examinations, all six were determined to have died unnatural deaths. Two cases were ruled medication errors, and four were deaths caused by suffocation. For example, one resident was found to have suffocated while tied to his nursing home bed, but the home never reported the death to the coroner. 
The Arkansas state survey agency, an entity within the Department of Human Services, and the MFCU, an organization within Arkansas's Office of the Attorney General, receive and investigate coroner referrals. Referrals also may be sent to a local city or county prosecutor. The Arkansas state survey agency treats referrals of suspected neglect of nursing home residents as complaints. As with other complaints, they are prioritized for investigation on the basis of the seriousness of the allegations. Arkansas, like other states, has additional categories with longer investigation time frames (45 days and next survey) for complaints judged to be less serious than immediate jeopardy (2 working days) and actual harm (10 working days). Complaint allegations are entered on an intake form that also includes the source of the complaint and, eventually, the outcome of the investigation. To document their actions, Arkansas surveyors generally prepare a one- to two-page summary specifically describing how the complaint was investigated and which specific allegations were or were not substantiated. Typically, the individual who filed the complaint is informed about the results of the complaint investigation. The Arkansas state survey agency uses a computerized system to track the status of complaint investigations. In Arkansas, the MFCU's authority to investigate resident abuse and neglect is limited to nursing homes that receive Medicaid reimbursement; therefore, it cannot investigate such allegations in a nursing home that participates only in Medicare or that accepts only private pay patients. Generally, MFCUs have concurrent jurisdiction with local investigative and prosecutorial authorities and can both investigate and prosecute such cases statewide. On the basis of an investigation, an MFCU can initiate criminal actions in state court but must first obtain permission from the local prosecutor. In such cases, the focus is not on whether a home is providing appropriate care but rather on whether the MFCU can substantiate in court that an act of neglect occurred. These cases may be settled out of court by a payment to the state's Medicaid program without an admission of guilt. The Pulaski County coroner informed us that, of the approximately 4,000 nursing home deaths he investigated from July 1999 through December 2003, he identified and referred 86 cases (2.2 percent) of suspected resident neglect to the state survey agency and the MFCU. Even when measured against the number of complaints filed against nursing homes and abuse and neglect case referrals to the MFCU, the number of coroner referrals was very small. However, the coroner's referrals, many accompanied by photos, often depicted signs of serious, avoidable care problems. According to the Pulaski County coroner, his staff generally arrives at the nursing home or hospital within 15 to 20 minutes after notification of a resident's death, which facilities are expected to provide immediately. Facilities have been instructed not to disturb the resident's body. The initial on-site investigation consists of (1) a physical examination of the body, which is photographed; (2) interviews with the treating physician, staff, and perhaps family members; and (3) a review of the decedent's medical records, including a comparison of doctors' prescriptions and nurses' notes to ensure that medications were properly administered. 
During the investigation, the coroner's staff looks for several key indicators of whether a decedent may have received poor care, including significant weight loss; dehydration; pressure sores; undocumented injuries, such as bruises or skin tears; and information obtained from interviews with family members. Many of these care indicators are similar to those examined during the state survey agency's annual inspection of every nursing home. Before releasing the body to a funeral home, the coroner may order a toxicology report or ask the state medical examiner to conduct an autopsy to determine whether care problems, such as a medication error or blood poisoning (sepsis) from infected pressure sores, contributed to the resident's death. Of the 86 residents referred by the coroner to the state survey agency and the MFCU, 14 had autopsies completed. Pressure sores, typically serious and often numerous, were the predominant indication of care problems identified in 67 percent of the coroner's referrals (see fig. 1). Pressure sores are caused by unrelieved pressure on the skin that squeezes the tiny blood vessels supplying the skin with nutrients and oxygen, causing the skin and, ultimately, underlying tissue to die. Most pressure sores can be prevented with adequate nutrition, sanitation, and frequent repositioning of the resident. In some of the coroner's photos, bone or ligament was visible, as were signs of infection or dead tissue, indicating a serious stage IV pressure sore (see table 3). Other indications of care problems identified by the coroner included bruises, abrasions, and skin tears (12 percent) and falls or broken bones (6 percent). For one referral, the bruise covered the decedent's entire upper chest; for another, it covered the arm from the elbow to the shoulder. In about 15 percent of referrals, the indications of care problems identified by the coroner were difficult to categorize, such as a decedent with a catheter whose penis was bloody and irritated, a resident who died when he attempted to burn off his restraints with a cigarette lighter, and a resident who was taken to the hospital with breathing problems. An autopsy of the last resident revealed the presence of toxic or excessive levels of drugs that likely caused the respiratory problems and contributed to the development of pneumonia and to death. For some referrals, the coroner found evidence of multiple care problems. For example, a 1999 referral involved a decedent with a 9-square-inch pressure sore on her lower back, a gangrenous foot, and ants on her feeding tube and wounds. According to the resident's daughter, the odor in her mother's room at the nursing home was so great that she had to leave. The autopsy attributed the gangrene to arteriosclerosis that restricted the blood supply to her legs but also found that the resident suffocated when dried mucus that had accumulated in her mouth broke off and blocked her breathing passage. According to the MFCU, her wounds and oral care appeared to have been neglected for some time. The 86 cases of suspected resident neglect occurred in 27 nursing homes. Although it is difficult to precisely identify the proportion of Pulaski County nursing homes that had referrals, because facilities closed and opened during the time period we examined, over half of the 27 homes had three or more referrals (see fig. 2). Fourteen homes accounted for almost 80 percent of the referrals. Some homes had a pattern of referrals spanning several years. 
For example, one home had seven referrals—one in 1999, two in 2000, two in 2001, and another two in 2002. Three of these seven referrals involved stage IV pressure sores, some of which were blackened with dead tissue, and one referral involved a resident who died because of an overdose of drugs administered by the nursing home. Nineteen of the 27 nursing homes were referred by the Pulaski County coroner, many of them more than once, because the deceased residents had pressure sores (see app. I). Eleven of the 12 referrals for one home involved pressure sores. The standard surveys of these homes, however, infrequently raised concerns about the care provided to prevent and treat pressure sores. As of November 2003, 15 of the 19 homes had not been cited on any of the previous four standard surveys for a pressure sore deficiency at the actual harm level or higher, while 3 homes each had one such deficiency. According to Arkansas state survey agency officials, the agency received 36 coroner referrals of suspected resident neglect, fewer than half of the 86 referrals the coroner said he made. The agency's investigations of these coroner referrals often understated serious care problems—both when neglect was substantiated and when it was not (see app. II). Even in the majority of substantiated referrals, the state survey agency failed to cite serious deficiencies involving care problems for the decedents who were the subjects of the referrals, in effect not confirming the predominant care problems identified by the coroner. The MFCU's investigations of many of these same referrals, however, frequently found that facilities had been negligent in caring for the decedents, identifying serious lapses in care. In half of the referrals not substantiated by the state survey agency, either the MFCU investigation found neglect or we questioned the basis for the "not substantiated" findings, and our concerns were confirmed by a professor of nursing with expertise in long-term care. Moreover, the MFCU found inconsistencies in the medical records for some decedents, raising a question about the state survey agency's conclusion that the same records indicated care had been provided. Although the Pulaski County coroner told us that he had referred 86 cases of suspected resident neglect from July 1999 through December 2003, Arkansas state survey agency officials said that they received fewer than half (see table 4) and investigated all but one of the referrals they received. MFCU officials, however, indicated that they received almost three-fifths of the 86 referrals. The MFCU received all but three of the referrals received by the state survey agency. Overall, 32 coroner referrals were not investigated by either agency. According to the coroner, all the referrals were hand delivered rather than mailed to ensure that none were lost, but officials at the state survey agency and the MFCU told us that they did not know how referrals were delivered. We found inconsistencies in agency and MFCU recordkeeping. For example, the state survey agency told us that it had received five referrals on the coroner's list but could not provide a copy of any complaint intake forms for them or the results of its investigations for three of the five referrals. While an MFCU official told us that three other referrals were forwarded to it by the state survey agency, not the coroner, the state survey agency had no record of these referrals. 
The 50 coroner referrals not received by the state survey agency were similar to those received. For example, one decedent had large, unexplained bruises on her chest, upper right arm, and back, including a mass of more than 9 square inches that likely consisted of clotted blood from a broken blood vessel. A second decedent had five pressure sores—lower leg, heel, lower back, and both hips; according to the coroner's report, one of the pressure sores was "draining a dark-colored, pus-filled, and foul-smelling fluid." The decedent's medical records indicated admission to the nursing home 6 months before death without any pressure sores. A third decedent had 10 pressure sores, with dead tissue on one heel. A fourth decedent had a large tear on the upper arm, a pressure sore on one foot with dead tissue extending to mid-calf, and a stage IV pressure sore on one buttock. For three coroner referrals not received by the state survey agency but investigated by the MFCU, the MFCU found negligent care that resulted in settlements and payments by the facilities. With the exception of one home, we found that state survey agency complaint investigations of coroner referrals often failed to cite serious deficiencies for the decedents being investigated, even though over half of the referrals investigated were substantiated. Overall, the state survey agency substantiated 22 of the 36 coroner referrals it investigated at 12 nursing homes. However, the state survey agency cited actual harm or higher-level deficiencies in quality of care, abuse/neglect, or both for only 11 of these 22 substantiated referrals (see table 5). Nursing home A accounted for 6 of the 11 citations for neglect of decedents at the actual harm or higher level (see table 5). The neglect involved inadequate care to prevent and treat pressure sores. The home was terminated from participation in Medicare and Medicaid in November 2000, about 5 months after the first of a series of state survey agency complaint investigations initiated as a result of coroner referrals. Although the agency found that six of the coroner-referred decedents had been neglected by home A, the results of this home's March 2000 standard survey and the timing and results of some complaint investigations prior to its closure were inconsistent with those findings. We identified the following inconsistencies in surveys of this home: The home's March 3, 2000, standard survey found no deficiencies other than a C-level deficiency (potential for minimal harm) for inadequate housekeeping and maintenance, including a water-damaged ceiling tile, soiled carpeting, and worn upholstery on a sofa. The survey's resident sample, however, included a resident who died in mid-April, less than 6 weeks after the standard survey, with five stage IV pressure sores. Even though the photos accompanying coroner referrals for four decedents suggested serious, systemic care problems, the state survey agency did not initiate a complaint investigation until May 16, 2000, about 3 weeks after receiving the referrals, which were all sent at the same time. CMS guidance requires that such complaints be investigated within 2 to 10 days, but state survey agency officials told us that they often gave a higher priority to investigating serious complaints for living residents. The state survey agency cited actual harm deficiencies for quality of care for three of the four decedents because similar care problems were found for current residents at the facility.
The May 16 investigation, however, included March 27 and April 3 complaints from family members of one resident alleging that he (1) had deteriorating, unbandaged pressure sores and (2) was left wet and soiled for long periods, a situation that could have contributed to worsening pressure sores. These allegations went uninvestigated for almost 2 months until they were confirmed in May. Investigation of a subsequent July complaint for this resident documented further deterioration of the pressure sores that began on his buttocks and extended all the way up his back. Although this same resident was included in the sample of a subsequent September 2000 complaint investigation, his continuing pressure sores were not cited during that investigation. A final complaint investigation at the home about 6 weeks later—following the resident's death—found that he had 28 pressure sores when he died; 7 of the pressure sores, 2 of which were stage IV, did not have a physician's order for treatment. Only five of the referrals for decedents at other homes resulted in the citation of a deficiency at the actual harm or higher level for the decedent in question (see table 5). The deficiencies cited involved quality of care or abuse/neglect for four of the five decedents. For one of the five decedents, who had numerous, serious pressure sores, no current violations of federal standards were identified during the investigation of the coroner's referral. Under CMS guidance, surveyors would need to identify a current resident with inadequate treatment to prevent and heal pressure sores in order to cite a pressure sore deficiency at the actual harm level. However, the surveyor determined that an egregious past violation of federal standards involving this decedent warranted citing a deficiency known as past noncompliance and imposition of a civil monetary penalty. Because the deficiency occurred in the past and was assumed to have been corrected by the facility, a plan of correction was not required and no deficiency could be cited for the underlying care issue—inadequate treatment to prevent and heal pressure sores. Although Arkansas state survey agency officials told us that they frequently cite past noncompliance, we found that it was cited for only one coroner referral. For the remaining 11 substantiated coroner referrals, the state survey agency either cited no deficiency for the decedent or cited a deficiency at a level lower than actual harm for the predominant care problem identified by the coroner, even though the MFCU's investigations found neglect for six of the decedents, in effect substantiating the existence of serious care problems in these cases (see table 6). The MFCU's findings raise a question about the thoroughness of state survey agency complaint surveys. Because the nature of the problems identified by the coroner in these 11 referrals did not appear to differ significantly from referrals for home A that were substantiated at the actual harm or higher level (see table 5), we asked the state survey agency to review the 11 referrals to determine why no serious deficiencies were cited and whether past noncompliance should have been cited. Noting their current heavy workload, state survey agency officials agreed to review 2 of the 11 cases.
They told us that they could not cite an actual harm pressure sore deficiency for either decedent because the decedents were not in the facility at the time of the complaint investigations; under CMS guidance, surveyors would need to identify a current resident with inadequate treatment to prevent and heal pressure sores in order to cite a pressure sore deficiency at the actual harm level. In one of these cases, however, agency officials told us that they should have cited past noncompliance because of the serious nature of the decedent's condition. On the basis of the MFCU's investigations and our own review, we question the state survey agency's decision not to substantiate more of the coroner's referrals or forward them to another agency for further investigation. Overall, the state survey agency did not substantiate 14 of the 36 coroner referrals that it investigated. Although we did not assess each of the 14 unsubstantiated referrals in detail, the state survey agency's findings for 7 decedents were challenged either by the results of the MFCU's investigations or by an expert review conducted at our request. Both the MFCU and our expert noted omissions and contradictions in the medical records of some of the 14 decedents, raising a question about the state survey agency's conclusions that the same records indicated care had been provided. The MFCU's investigations identified neglect of two decedents whose referrals the state survey agency did not substantiate. In one of the cases, the MFCU found that the nursing home failed to (1) accurately assess changes in the resident's status, allowing the resident to develop stage II pressure sores before the staff was even aware that he had a skin problem; (2) track the resident's ability to perform certain basic activities of daily living; (3) routinely monitor his weight despite continued weight loss; and (4) follow physician orders, sometimes delaying prescribed treatment. In the other case, the MFCU found that the nursing home failed to provide necessary treatment, rehabilitation, care, food, and medical services. In particular, the resident had no skin breakdown upon admission to the facility, but 7.5 months later she had six pressure sores, including one on her right hip that was almost 4 inches across and had progressed to stage IV, and two others that had progressed to stage III. There was no comprehensive care plan to address the resident's pressure sores. The MFCU also found other aspects of her care negligent. For example, during a hospital stay about 2 months before the resident's death, the hospital found a large area on the back of her tongue with a thick buildup of saliva, evidence that her mouth had not been properly cleaned at the nursing home for up to 7 days. For five other coroner referrals not substantiated by the state survey agency, the expert agreed that we had a basis to question the state survey agency's findings. For example, the expert found that (1) some facilities were not removing the dead tissue around pressure sores; (2) the color of one decedent's skin suggested it was urine-stained, a situation that contributes to skin breakdown and infection; and (3) two decedents were not receiving oral care, the lack of which the expert characterized as "profound" for one decedent. For three of the five cases, the expert found evidence that neglect contributed to the residents' physical condition as documented in the coroner's referrals.
In general, the expert found the degree of skin damage and pressure sores in the reviewed cases to be "very suspicious" and concluded that preventive measures, such as special mattresses, would have precluded the development of such severe pressure sores, despite the decedents' health status. The expert also found the scarce and inconsistent mention of pain assessment and management to be suspicious enough to warrant concern about abuse. Although three of the five deceased residents were receiving hospice care at the nursing home, our expert questioned the apparent lack of care for these residents. Ideally, hospice care provides consistent pain assessment and intervention, measures to prevent further skin breakdown and the associated discomfort, and local treatment to minimize odor. These standards are inconsistent with not changing pressure sore dressings, even if a family member asks not to have them changed. Finally, our expert questioned whether some of the facilities had a quality assurance process in place to identify systemic problems, such as the incidence of pressure sores. We found that the state survey agency had cited the facility where two of the five decedents had resided for immediate jeopardy regarding the federal requirement to maintain a quality assurance committee that meets regularly. This deficiency was cited about 9 months before and 9 months after the residents' deaths. In two of the five cases, the state survey agency had concluded that serious pressure sores were acquired during hospitalizations but did not identify other care problems noted by our expert consultant. For example, one of the nursing homes failed to remove dead tissue around the pressure sores, an indication of poor care. In addition, the expert noted the lack of oral care for one of these decedents, again raising questions about the quality of care provided by the home. Even if the state survey agency had justifiably concluded that the decedents' serious pressure sores were acquired during hospitalizations rather than in the nursing homes where the residents died, neither case was referred to Arkansas's Division of Health Facility Services, the entity responsible for oversight of hospitals that serve Medicare and Medicaid beneficiaries. State survey agency officials agreed that it might have been appropriate to refer such cases to this division. CMS's 1999 guidelines for complaint investigations instruct state survey agencies to refer cases to another agency when they lack jurisdiction. Omissions and contradictions in the medical records for four other decedents whose referrals were not substantiated raise a question about the state survey agency's conclusions that these same records indicated care had been provided. For example, in two cases, the MFCU found numerous omissions in the facility's care and treatment records, such as missing entries on the medication records and nurse assistant flow sheets, as well as a discrepancy as to when a pressure sore was first noted. In another case, the MFCU concluded that there were so many documentation problems that it was difficult to follow the course of one decedent's care, including late entries that were "questionable and too many." In yet another case, our expert consultant found that the seriousness of a pressure sore was understated by the home. Federal surveyors also found evidence that state surveyors missed or failed to cite deficiencies, including some that harmed residents.
A March 2000 federal comparative survey of an Arkansas nursing home, some of whose residents were the subject of coroner referrals, found care issues that had not been identified by the state survey agency. A comparative survey is conducted within 2 months of a state survey to independently verify its accuracy. Overall, federal surveyors cited 19 health-related deficiencies that state surveyors did not, including failure of the nursing home to develop and implement effective procedures to prevent neglect and abuse of residents. Three of the 19 deficiencies that state surveyors did not identify were cited by federal surveyors at the actual harm level: failure to provide (1) necessary care and services to maintain a resident's highest well-being; (2) good nutrition, grooming, and personal and oral hygiene; and (3) treatment and services to increase a resident's range of motion and prevent its further degradation. Federal surveyors also cited a widespread failure in infection control procedures at the potential for more than minimum harm level. One of the coroner-referred deaths at this facility occurred within 6 weeks of both the state and federal surveys, which were about 1 month apart. The decedent arrived in the hospital emergency room with a fever of 104°F, an indication of infection, as well as ragged tears on his right knee and shin and serious pressure sores on both buttocks. Though documentation was not available, a state survey agency official told us that this complaint was unsubstantiated. Because of oversight weaknesses that are well documented nationwide, neglect of nursing home residents may often go undetected. We found the same systemic oversight weaknesses in the Arkansas state survey agency's investigation of coroner referrals that our prior work on nursing home quality of care identified nationwide. These oversight weaknesses include (1) complaint investigations that understated the seriousness of the allegations and were not conducted promptly; (2) annual standard survey schedules that allowed nursing homes to predict when the next survey would occur; (3) survey methodology weaknesses, coupled with surveyor reliance on misleading medical records, that resulted in overlooked care problems; and (4) a policy that did not always hold nursing homes accountable for care problems identified after a resident's death. In 1999, we reported that many survey agencies in the 14 states we examined often assigned inappropriately low investigation priorities to complaints and failed to investigate serious complaints promptly. Such practices may delay the identification and correction of care problems that may involve other residents of a nursing home in addition to the resident who is the subject of the complaint. In response to our draft report, CMS reviewed the Arkansas state survey agency's prioritization of the 36 coroner referrals the agency said it received. CMS concluded that about 31 percent of the referrals should have been prioritized for more prompt investigation. Furthermore, CMS found that 5 referrals prioritized by the state as requiring an investigation within 10 working days suggested the potential for immediate jeopardy and should have been prioritized for investigation within 2 working days. The state survey agency prioritized 6 other referrals as not requiring investigation for up to 45 days, but CMS indicated that 1 of these referrals should have been prioritized for investigation within 2 days (immediate jeopardy) and the remaining referrals within 10 working days (actual harm).
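The triage scheme CMS applied in this review pairs each severity category with a fixed investigation deadline. The following Python sketch is illustrative only: the names and the treatment of the 45-day window are our own assumptions based on the deadlines described above, not CMS's actual guidance.

```python
from enum import Enum

class Priority(Enum):
    IMMEDIATE_JEOPARDY = "immediate jeopardy"  # investigate within 2 working days
    ACTUAL_HARM = "actual harm"                # investigate within 10 working days
    LOWER = "lower priority"                   # investigate within up to 45 days

# Deadlines, in days, per the CMS prioritization described above.
DEADLINES = {
    Priority.IMMEDIATE_JEOPARDY: 2,
    Priority.ACTUAL_HARM: 10,
    Priority.LOWER: 45,
}

def investigation_was_timely(priority: Priority, elapsed_days: int) -> bool:
    """Flag whether an investigation began within the deadline for its priority."""
    return elapsed_days <= DEADLINES[priority]

# Example: a referral triaged as actual harm but not investigated
# for 46 days would be flagged as untimely.
print(investigation_was_timely(Priority.ACTUAL_HARM, 46))  # False
```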
Although the state survey agency classified most of the 36 referrals as requiring investigation within 10 working days, we found a significant disparity between the prioritization it assigned and the time it actually took to conduct the investigations. As shown in figure 3, 16 referrals were investigated in 10 working days or fewer, and 19 referrals took between 11 and 290 working days to investigate. Measuring time frames in working days, as CMS's guidance requires, however, understates the actual elapsed time between receipt and investigation of referrals (the sketch following this discussion illustrates the working-day and calendar-day calculations). The average elapsed time from the date the survey agency received a referral until it initiated its investigation was 46 calendar days. Seven referrals were not investigated for between 91 and 425 calendar days, and the investigation of an additional 11 referrals took between 21 and 90 calendar days (see fig. 3). State survey agency officials told us that because of surveyor turnover and the number of complaints received from all sources, the agency could not investigate all coroner complaints quickly; CMS has identified untimely complaint investigations in many other states. Moreover, Arkansas state survey agency officials told us that they gave priority to allegations involving residents who were still living in a facility over comparable allegations involving deceased residents, even though the coroner's referrals were accompanied by photos that suggested the possibility of systemic care problems. In 1998 and subsequent work, we found that nursing homes could conceal care problems if they chose to do so because annual state surveys were often predictable. For example, a home could (1) significantly change its level of care, food, and cleanliness by temporarily augmenting its staff just prior to or during the period of the survey and (2) adjust resident records to improve the overall impression of the home's care. We believe that the striking disparity between annual survey findings that failed to identify serious problems in preventing and treating pressure sores and the numerous instances of serious pressure sores identified by the coroner is partly the result of the predictability of annual surveys. In July 2003, we reported that standard surveys in Arkansas, as well as those nationwide, continued to be highly predictable: the timing of 36 percent of Arkansas's most recent surveys (34 percent nationwide) could have been predicted by nursing homes. We considered nursing home surveys predictable if homes were surveyed within (1) 15 days of the 1-year anniversary of their prior survey (28 percent for Arkansas) or (2) 1 month of the maximum 15-month interval between standard surveys (8 percent for Arkansas); the sketch following this discussion also illustrates this test. The director of the Arkansas state survey agency acknowledged that the predictability of the state's standard surveys allowed homes to mask problems by having more staff on hand during surveys. On the basis of the finding in our 2003 report, she told us she had tried to reduce survey predictability, in part by using computer programs to vary the timing of homes' surveys. Of Arkansas's approximately 236 nursing homes, 168 were surveyed after our last report (August 1, 2003, through June 22, 2004); 22.6 percent of those surveys were predictable. In 1998, we recommended that CMS segment the standard survey into more than one review throughout the year, simultaneously increasing state surveyor presence in nursing homes and decreasing survey predictability.
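Both measures discussed above (elapsed time in working versus calendar days and the survey predictability test) reduce to simple date arithmetic. A minimal Python sketch follows; the dates, function names, and the 455-day approximation of a 15-month interval are our own illustrative assumptions, not drawn from CMS guidance or agency records.

```python
from datetime import date, timedelta

def working_days_between(start: date, end: date) -> int:
    """Count weekdays (Mon-Fri) after start, up to and including end."""
    count, d = 0, start
    while d < end:
        d += timedelta(days=1)
        if d.weekday() < 5:  # Monday=0 ... Friday=4
            count += 1
    return count

def is_predictable(prior: date, current: date) -> bool:
    """Apply the 2003 report's two predictability criteria:
    (1) current survey within 15 days of the 1-year anniversary of the
        prior survey, or
    (2) current survey within 1 month of the maximum 15-month interval
        between standard surveys (approximated here as 455 days)."""
    anniversary = prior + timedelta(days=365)
    max_interval = prior + timedelta(days=455)
    return (abs((current - anniversary).days) <= 15
            or max_interval - timedelta(days=30) <= current <= max_interval)

# A referral received on a Friday and investigated 10 working days
# later has actually waited 14 calendar days.
received = date(2000, 5, 5)                          # a Friday
investigated = received + timedelta(days=14)
print(working_days_between(received, investigated))  # 10
print((investigated - received).days)                # 14

# A home surveyed on March 3, 2000, and again on March 10, 2001, falls
# within 15 days of the 1-year anniversary -- a predictable survey.
print(is_predictable(date(2000, 3, 3), date(2001, 3, 10)))  # True
```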
Although CMS disagreed with segmenting the survey, it did recognize the need to reduce predictability. CMS directed states in 1999 to (1) begin at least 10 percent of standard surveys outside the normal workday (either on weekends, early in the morning, or late in the evening) and (2) avoid scheduling, if possible, a home's survey for the same month of the year as the home's previous standard survey. We reported previously that CMS's focus on so-called staggered surveys, though beneficial, was too limited to reduce survey predictability. Our 1998 work on California nursing homes revealed that surveyors may overlook significant care problems because (1) the federal survey protocol they follow does not rely on an adequate sample for detecting potential problems and their prevalence and (2) some resident medical records omit or contain misleading information. Because CMS has not yet completed the redesign of the survey methodology, nearly 7 years later Arkansas surveyors, as well as those in other states, still rely on a flawed methodology to detect resident care problems. As noted earlier, omissions and contradictions in the decedents' medical records, as well as the coroner's photos, sometimes raised questions about whether appropriate care had been provided in cases the state survey agency did not substantiate. Our 1998 report recommended that CMS revise federal survey procedures by using a stratified random sample of resident cases and reviewing sufficient numbers and types of resident cases. Under development since 1998, CMS's redesigned survey methodology is intended to more systematically target potential problems at a home and give surveyors new tools to better document care outcomes and conduct on-site investigations. Use of the new methodology could result in survey findings that more accurately portray the quality of care provided by a nursing home to all residents. CMS officials told us that the new methodology would be piloted in 2005 in conjunction with an evaluation that compares its effectiveness with that of the current survey methodology. Our work in Arkansas suggested the existence of sampling problems, underscoring the importance of implementing the revised survey methodology. For example, three residents with serious pressure sores who died on March 7, March 29, and April 3, 2000, and were the subject of coroner referrals were not included in the resident sample for one home's March 3, 2000, annual standard survey. The survey failed to identify any pressure sore or other quality of care deficiencies. It is difficult to understand how residents with such serious care problems could have been omitted from the survey. In addition, cases in which the MFCU identified neglect of severely deteriorated decedents but the state survey agency did not find similar problems for current residents also raise a question about the agency's sampling methodology, because the seriousness of the decedents' conditions suggested that the care problems were systemic. In some coroner referrals that the state survey agency did not substantiate, surveyors noted that the medical records indicated that care had been provided.
However, the MFCU found omissions and contradictions in decedents' medical records, including missing entries and late entries that were "too many and questionable." The medical record for one decedent showed the resident's height as 10 inches different from the height in her nutritional assessment (height is an important factor in determining a resident's appropriate weight). Since surveyors screen residents' medical records for indicators of improper care, misleading or inaccurate data may result in care deficiencies being overlooked. We also found evidence that Arkansas surveyors took medical records at face value even when these records were contradicted by color photos that documented decedents' physical conditions. For example, our expert consultant found that the coroner's photos of one decedent clearly showed that dead tissue around pressure sores had not been removed even though the state surveyor cited medical records indicating such care was provided just 11 days before the resident's death. The coloration of the same decedent's skin also suggested that she was left in her own waste for extended periods. However, the surveyor noted that the family's concern about staff's unresponsiveness to resident call lights was not substantiated because residents who were interviewed said that staff response was prompt. In our current work, we found that many Arkansas nursing homes with coroner referrals escaped accountability for providing poor care when the state survey agency investigated the neglect of nursing home residents after their deaths. We believe that CMS's vague policy on past noncompliance is partly responsible for this situation. First, the Arkansas state survey agency did not always cite past noncompliance when warranted. For example, the MFCU found that nursing homes had neglected eight decedents referred by the coroner, but the state survey agency either cited no deficiency for the decedents, cited a deficiency at a level lower than actual harm for the predominant care problems identified by the coroner, or found the referrals to be unsubstantiated. According to state survey agency officials, care problems similar to those of the decedents were not identified in a sample of current residents and, under CMS policy, the decedents' care problems were assumed to have been identified and corrected by the home. Second, for the one coroner referral that the Arkansas state survey agency did cite for past noncompliance, the home was not required to prepare a plan of correction because no current deficiency was identified. When past noncompliance is identified, it is recorded in OSCAR and on CMS's Nursing Home Compare Web site simply as past noncompliance, without additional information on the specific deficient practice(s), such as failure to prevent and treat pressure sores. Moreover, CMS policy discourages citing past noncompliance unless the violation is egregious. Although CMS officials indicate that "egregious" includes noncompliance related to a resident's death, the term is undefined and is not used in CMS's scope and severity grid, which defines serious deficiencies as actual harm or immediate jeopardy. According to CMS officials, the objective of its survey policy is to focus surveys on current residents and care problems rather than on poor care provided in the past. We question CMS's assumption that if a decedent's care problem is not found to affect other residents at the time of a complaint investigation, it was identified earlier by the home and corrected.
On the basis of our past work, it is also possible that the state survey agency's complaint investigation missed serious care issues. CMS and Arkansas state survey agency officials agreed that the poor physical condition of the decedents referred by the coroner suggested the existence of systemic care problems. The Arkansas law requiring coroner investigations of nursing home residents' deaths has helped to demonstrate that a small number of residents died in deplorable physical condition. The Arkansas law also confirmed the systemic weaknesses in state and federal oversight of nursing home quality of care that we identified in prior reports. On the basis of our prior work, we believe it is likely that serious care problems similar to those identified by the Pulaski County coroner exist in other Arkansas counties and in other states. Despite Arkansas's annual standard surveys and intervening complaint investigations, the negligent care provided to some residents before they died was never detected. In addition, complaint investigations initiated by the state survey agency in response to coroner referrals often failed to cite deficiencies for serious care problems that, according to the MFCU's investigations and our expert consultant, constituted or suggested neglect. Even when the Arkansas state survey agency found the neglect to be egregious, it generally did not hold the nursing home accountable by citing a little-used deficiency known as past noncompliance. We believe that CMS's policy on past noncompliance is flawed for three reasons. First, the policy involves considerable ambiguity. CMS does not define what constitutes an egregious violation yet implies that one exists where care problems relate to a resident's death, which is often difficult to demonstrate without an autopsy. Moreover, the term egregious is not clearly related to CMS's scope and severity grid, which defines serious deficiencies as actual harm or immediate jeopardy. Second, CMS's policy on past noncompliance does not hold homes accountable for negligence associated with a resident's death unless similar care problems are identified for current residents. CMS assumes that (1) similar care problems were not found because they have already been identified and corrected by the home and (2) the state survey agency did not miss the deficiency for current residents. However, our prior work demonstrated, and our work in Arkansas confirmed, that (1) nursing home records can contain misleading information or omit important data, making it difficult for surveyors to identify care deficiencies during their on-site reviews and (2) states' surveys of nursing homes do not identify all serious deficiencies, such as preventable weight loss and pressure sores. Third, the policy obscures the nature of the specific care problem, such as avoidable pressure sores, because the only deficiency reported in OSCAR and to the public on CMS's Nursing Home Compare Web site is "past noncompliance." We believe that the goal of preventing resident neglect by requiring nursing homes to comply with federal quality standards is inconsistent with a policy that discourages citing deficiencies because the harm was not deemed egregious enough or was potentially missed for current residents.
We recommend that the Administrator of CMS revise the agency's current policy on citing deficiencies for past noncompliance with federal quality standards by taking the following two actions: (1) hold homes accountable for all past noncompliance resulting in harm to residents, not just care problems deemed to be egregious, and (2) develop an approach for citing such past noncompliance in a manner that clearly identifies the specific nature of the care problem both in the OSCAR database and on CMS's Nursing Home Compare Web site. We provided a draft of this report to CMS; the Arkansas Department of Human Services, Office of Long Term Care (the state survey agency); the Arkansas MFCU; and the Pulaski County coroner. We received written comments from CMS and the survey agency and oral comments from the coroner. The MFCU stated that it did not have comments. CMS concurred with our recommendations to revise its policy on citing deficiencies for past noncompliance and also identified more than a dozen additional initiatives it plans to take to address shortcomings in the nursing home survey process. CMS commented that the focus of its initiatives, such as additional guidance on the scope and severity of deficiencies, would be broad, in effect supporting our conclusion that the shortcomings we identified were systemic and not limited to Arkansas. CMS and the state survey agency raised concerns about (1) the discrepancy we reported between the number of referrals the coroner said he made (86) and the number the survey agency said it received (36) and (2) the relevance of survey predictability to complaint investigations based on coroner referrals. In addition, the state survey agency commented that we had understated the number of investigations it actually conducted. (CMS's comments are reproduced in app. III, and the state survey agency's comments are reproduced in app. IV.) Our evaluation of CMS, survey agency, and coroner comments covers the following six areas: CMS's past noncompliance policy, shortcomings in state survey agency investigations, lessons from implementing the Arkansas law, the number of coroner referrals and survey agency investigations, survey predictability and methodology redesign, and the impact of the Arkansas law. CMS agreed with our recommendations to revise its past noncompliance policy. We found that some nursing homes were not held accountable for serious deficiencies, even though some coroner referrals were substantiated, because of flaws in CMS's policy governing past noncompliance. Following a planned review of the policy, CMS indicated that it would (1) clarify expectations for the manner in which state survey agencies should address past deficiencies that have only recently come to light, (2) further define important terms, particularly egregious, (3) ensure that the specific nature of the care problems is identified in OSCAR, and (4) strengthen criteria for determining whether a nursing home had actually taken steps to address deficiencies that contributed to past noncompliance. CMS did not indicate whether it also planned to identify the specific nature of deficiencies associated with past noncompliance on its Nursing Home Compare Web site, but we continue to believe that posting such information would provide valuable assistance to consumers. Because of the seriousness of the shortcomings we identified, CMS sent a clinical fact-finding team to Arkansas for 3 days after receiving a draft of this report.
The CMS clinical team found that some, but not all, of the referrals for which lower-level deficiencies were cited should have received a higher severity rating. In addition, of six coroner referrals that were not substantiated by the survey agency, the team believed two should have been substantiated, a higher disparity rate than CMS said it had typically found for Arkansas surveys in general. As a result of its team's visit, CMS concluded that additional training and clarification of its guidance were warranted, including (1) increased training for state surveyors in determining the appropriate scope and severity of deficiencies, as well as the development of additional CMS guidance and analysis of patterns in state deficiency citations, and (2) the development of an advanced course in complaint investigations to be piloted in Arkansas and evaluated for potential expansion and replication nationwide. CMS noted that these initiatives would be applied broadly, a recognition that the shortcomings we identified were systemic and not limited to Arkansas. While we fully support CMS's new initiatives, timely and sustained follow-up to ensure effective implementation is critical; earlier CMS initiatives to address these same problems were not timely or were ineffective. We reported in July 2003 that CMS began a complaint improvement project in 1999 but did not provide more detailed guidance to states until almost 5 years later. Similarly, we reported that CMS began developing more structured guidance for surveyors in October 2000 to address inconsistencies in how the scope and severity of deficiencies are cited across states, but the first installment, on pressure sores, had not yet been released as of September 2004. Our 2003 report also noted that CMS began annual reviews of a sample of deficiency citations from each state in October 2000 to identify shortcomings and the need for additional training, but CMS's recognition that additional guidance and training are required raises a question about the sufficiency and effectiveness of these reviews. Furthermore, we believe that other factors may be contributing to survey shortcomings. Our 2003 report noted that some state officials cited inexperienced surveyors, the result of a high turnover rate, as a factor contributing to the understatement of serious quality-of-care deficiencies. CMS commented that the photos conveyed from the coroner's office were graphic and serious and required careful investigation. The CMS clinical team found that the photos were very helpful in a number of investigations. We agree with CMS's view that the photos alone do not represent sufficient evidence to render a conclusion that there was poor care, neglect, or avoidable outcomes, or that the nursing home caused the death. On the basis of its visit to Arkansas, the CMS clinical team concluded that not all referred cases could be substantiated with the photos, medical records, and other information available to it; as we noted in the report, our expert consultant reached the same conclusion on two of the seven cases she reviewed. We nevertheless continue to believe that the state survey agency at times appeared to dismiss photographic evidence of potential neglect and to rely instead on observations of and interviews with current residents. In response to our findings, CMS said it would study the issues involved in the use of photos and would issue additional guidance for use by state survey agencies.
CMS made a number of observations about lessons from the Arkansas experience that would improve the effectiveness of mandatory reporting systems, such as the coroner referrals required by the Arkansas law. These lessons related to the implementation of the Arkansas law by local coroners and to the quality and timeliness of referrals made by the Pulaski County coroner. We agree that these factors are important to the ability of state survey agencies to promptly and effectively complete their own investigations based on coroner referrals of potential neglect. However, because we lack the authority to evaluate the implementation of state laws, we excluded such an analysis from the scope of our work. We do have the authority to evaluate the performance of federally funded entities—such as the state survey agency and the MFCU—that are responsible for ensuring that Medicare and Medicaid nursing home residents receive quality care, and we therefore focused our work on how these entities responded to the cases referred to them. In particular, CMS highlighted the lack of referrals from most Arkansas coroners and the processes followed by coroners, primarily the Pulaski County coroner, in making referrals to the state survey agency. During our interviews, the Pulaski County coroner and MFCU officials demonstrated their awareness of the absence of an enforcement mechanism in the state law to ensure that nursing homes and coroners comply with the law; the Pulaski County coroner told us that he intends to pursue this issue with the state legislature. According to CMS, the quality of the documentation provided by coroners did not conform to key principles of forensic science, such as embedded photo dating and subject identification, photo scale metrics and color charting, and interviews with residents' physicians. While the coroner referrals may have lacked these features, the referral packages we examined clearly identified the decedents, the time the coroner's office was notified of the deaths, and the time the coroner's staff arrived at the homes. It is also clear from the documentation that the photos were taken shortly after death. Requiring such a level of forensic evidence from the coroner substantially exceeds the burden of proof the state survey agency requires for the other complaints it receives; coroner referrals are treated as complaints. The coroner referrals are intended to be the starting point for the state's investigation, not a substitute for its own thorough investigation. Both CMS and the state survey agency expressed concern about the elapsed time between the dates of death and the receipt of coroner referrals by the survey agency. In particular, they noted that our analysis excluded five referrals the coroner made in 2004 that related to deaths in 2003, with the elapsed times from the deaths to receipt of the referrals ranging from 222 to 400 days. We excluded these five referrals because they had not yet been made when we completed our data collection for this report, which covered referrals for the period July 1999 through December 2003. In principle, we agree with CMS's view that the value of a timely investigation by the state survey agency can be undermined by delays in making referrals, even though we found that the coroner's referral of several cases up to 4 months after the residents' deaths did not appear to have handicapped the investigations.
For example, the state survey agency substantiated three coroner referrals with deficiencies at the actual harm and immediate jeopardy levels even though the referrals were not received until 65 to 106 calendar days after the residents' deaths. Although the survey agency did not substantiate one coroner referral that was not received until 102 days after the resident's death, the MFCU found neglect. For the 36 referrals the survey agency said it received from the coroner for the period we analyzed, the average elapsed time from the date of death until the coroner made his referral was 38 days (ranging from zero to 180 days), whereas the average elapsed time from the date the survey agency received the referral until it initiated its investigation was 46 days (ranging from zero to 425 days). Notwithstanding these elapsed times for coroner referrals and state investigations, CMS commented that it would study its priority criteria for complaint triage and refine its policy with regard to the treatment of and response to complaints. Both CMS and the state survey agency questioned the validity of the number of Pulaski County coroner referrals, commenting that we lacked independent verification of the number actually referred; they also believed that the report's language suggested referrals had been received but not investigated. We revised the report to make it clear that the coroner told us he had referred 86 cases of suspected neglect of deceased nursing home residents to the state survey agency and the MFCU for investigation (and, as noted below, we reviewed the related case documentation for each of the 86 referrals). We also revised the report to clarify that the state survey agency investigated the 36 coroner referrals that it told us it had received. CMS asserted that the coroner was unable to provide its clinical team with a list of his referrals; however, CMS's comments do not acknowledge that the coroner's case files were not automated. We compiled a list of the 86 referrals ourselves, based on documentation provided by the coroner for each of the cases he told us he referred, including a narrative summary describing the suspected neglect, copies of decedents' medical records, autopsy reports, and photos documenting the decedents' conditions. Although the state survey agency and the MFCU told us that they did not receive all 86 coroner referrals, we believe that the MFCU's receipt of almost three-fifths of the coroner's referrals (compared with the state survey agency's receipt of fewer than half) provides independent corroboration that the Pulaski County coroner made more than 36 referrals during the 4.5-year period we examined. As noted in the report, the coroner was instrumental in securing passage of the law, a fact that is inconsistent with the suggestion that he withheld referrals. To address the disparity between the number of referrals the coroner told us he made and the number the state survey agency and the MFCU told us they received, the coroner began requiring signed receipts in March 2004, a practice reflected in our draft report. The state survey agency commented that we had understated the number of investigations of nursing home deaths it had conducted. The agency identified 22 investigations that, in most cases, were based on the receipt of a complaint from individuals other than the coroner. We excluded 9 of these 22 investigations because they were conducted prior to the residents' deaths.
For example, one complaint of alleged rape of a 91-year-old resident was filed by a hospital that found the resident had a sexually transmitted disease. The complaint was not substantiated. The coroner's investigation of the resident's death 5 months later resulted in a referral based on seven serious pressure sores on the decedent's feet, lower back, and hips, a problem that was not noted during the hospitalization. We revised our analysis to include 1 of the 22 cases because the coroner confirmed that he had indeed made the referral. Thus, we adjusted the number of coroner referrals from 85 in the draft report to 86 in the final report. We also revised the number of referrals the state survey agency said it received from 35 to 36. We confirmed that this additional referral was not received or investigated by the MFCU. For 7 cases, we determined that the allegations in the non-coroner complaints were similar to the concerns raised by the coroner's investigations, and we added footnotes in the appropriate sections of the report, depending on whether the investigations substantiated (2 complaints) or did not substantiate (5 complaints) the complainants' allegations. For the remaining 5 cases, we made no changes in the report. In 1 case, the survey agency's complaint investigation focused on an issue different from the suspected neglect identified by the coroner. In 4 other cases, the agency included the decedents' records in its resident samples during standard surveys. The decedents were not the subject of any deficiencies cited during these surveys, and, importantly, the surveyors lacked the coroner's photos of pressure sores, which would have been particularly useful in raising questions about the care provided as documented in the decedents' medical records. Both CMS and the state survey agency questioned the relevance of survey predictability to complaint investigations resulting from coroner referrals and suggested we delete this analysis from the final report. Neither organization commented on our assessment of the impact of survey methodology weaknesses and misleading medical records on detecting quality-of-care problems. We retained this analysis in the final report because we believe the issues of survey predictability and methodology are relevant to state survey agency complaint investigations of coroner referrals. Our 1998 and subsequent work found that predictable surveys allowed homes so inclined to (1) significantly change the level of care, food, and cleanliness by temporarily augmenting staff just prior to or during a survey and (2) adjust resident records to improve the overall impression of the home's care. We also reported in 1998 that surveyors may overlook significant care problems during annual surveys because of survey methodology weaknesses and omissions or misleading information in resident medical records. Although the predominant care problem identified in 67 percent of the coroner's referrals involved serious pressure sores, most of the nursing homes referred had not been cited for a pressure sore deficiency at the actual harm level or higher on any of their previous four standard surveys. We believe that the striking disparity between annual survey findings and the predominant care problems identified by the coroner relates to the predictability of annual surveys, weaknesses in survey methodology, and misleading medical records—all of which contribute to the phenomenon of undetected care problems.
Our work in Arkansas suggested the existence of sampling problems in a home whose annual survey failed to detect any quality-of-care problems, even though three residents, all with serious pressure sores, died within 1 month of the survey. The fact that none of these residents was included in the nursing home's annual standard survey underscores the importance of implementing a revised survey methodology that CMS has had under development for 7 years. Our report also provides several examples where misleading medical records contributed to the failure of the Arkansas state survey agency to detect care problems that the MFCU or our expert consultant identified and that were obvious in some of the coroner's photos of decedents. CMS further commented that our analysis of survey predictability resurrected prior reports and recommendations to which CMS has previously responded and that we failed to acknowledge CMS and state survey agency progress in reducing survey predictability. We believe that CMS's comments are inaccurate. In our 1998 report, we recommended segmenting the survey into more than one review throughout the year to reduce survey predictability. CMS responded to this recommendation by requiring that 10 percent of state annual surveys be conducted on weekends, at night, or early in the morning. Despite CMS's introduction of "off-hour" surveys, we reported in 2003 that about one-third of state surveys remained predictable (36 percent in Arkansas). Contrary to CMS's comments, the draft report did acknowledge that Arkansas appeared to be making progress in reducing survey predictability through the use of computer programs to vary the timing of homes' surveys. In oral comments, the Pulaski County coroner indicated that our report was fair and accurate. He also told us that he believes the law has had a significant, positive impact on the quality of care provided to nursing home residents in Pulaski County. In particular, he said that he rarely finds decedents with serious pressure sores and that the pressure sores he does find are not as serious as those in earlier referrals. He also cited the declining number of referrals: only six resident deaths in 2003 were referred, compared with 18 in 2002. In addition, he provided technical comments that we incorporated as appropriate. As arranged with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its issue date. At that time, we will send copies of this report to the Administrator of the Centers for Medicare & Medicaid Services and appropriate congressional committees. We also will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. Please contact me at (202) 512-7118 or Walter Ochinko, Assistant Director, at (202) 512-7157 if you or your staffs have any questions. GAO staff who made key contributions to this report include Jack Brennan, Lisanne Bradley, Patricia A. Jones, and Elizabeth T. Morrison. Nursing Home Fire Safety: Recent Fires Highlight Weaknesses in Federal Standards and Oversight. GAO-04-660. Washington, D.C.: July 16, 2004. Nursing Home Quality: Prevalence of Serious Problems, While Declining, Reinforces Importance of Enhanced Oversight. GAO-03-561. Washington, D.C.: July 15, 2003. Nursing Homes: Public Reporting of Quality Indicators Has Merit, but National Implementation Is Premature. GAO-03-187. Washington, D.C.: October 31, 2002. Nursing Homes: Quality of Care More Related to Staffing than Spending.
GAO-02-431R. Washington, D.C.: June 13, 2002. Nursing Homes: More Can Be Done to Protect Residents from Abuse. GAO-02-312. Washington, D.C.: March 1, 2002. Nursing Homes: Federal Efforts to Monitor Resident Assessment Data Should Complement State Activities. GAO-02-279. Washington, D.C.: February 15, 2002. Nursing Homes: Success of Quality Initiatives Requires Sustained Federal and State Commitment. GAO/T-HEHS-00-209. Washington, D.C.: September 28, 2000. Nursing Homes: Sustained Efforts Are Essential to Realize Potential of the Quality Initiatives. GAO/HEHS-00-197. Washington, D.C.: September 28, 2000. Nursing Home Care: Enhanced HCFA Oversight of State Programs Would Better Ensure Quality. GAO/HEHS-00-6. Washington, D.C.: November 4, 1999. Nursing Homes: HCFA Should Strengthen Its Oversight of State Agencies to Better Ensure Quality of Care. GAO/T-HEHS-00-27. Washington, D.C.: November 4, 1999. Nursing Home Oversight: Industry Examples Do Not Demonstrate That Regulatory Actions Were Unreasonable. GAO/HEHS-99-154R. Washington, D.C.: August 13, 1999. Nursing Homes: HCFA Initiatives to Improve Care Are Under Way but Will Require Continued Commitment. GAO/T-HEHS-99-155. Washington, D.C.: June 30, 1999. Nursing Homes: Proposal to Enhance Oversight of Poorly Performing Homes Has Merit. GAO/HEHS-99-157. Washington, D.C.: June 30, 1999. Nursing Homes: Complaint Investigation Processes in Maryland. GAO/T-HEHS-99-146. Washington, D.C.: June 15, 1999. Nursing Homes: Complaint Investigation Processes Often Inadequate to Protect Residents. GAO/HEHS-99-80. Washington, D.C.: March 22, 1999. Nursing Homes: Stronger Complaint and Enforcement Practices Needed to Better Ensure Adequate Care. GAO/T-HEHS-99-89. Washington, D.C.: March 22, 1999. Nursing Homes: Additional Steps Needed to Strengthen Enforcement of Federal Quality Standards. GAO/HEHS-99-46. Washington, D.C.: March 18, 1999. California Nursing Homes: Federal and State Oversight Inadequate to Protect Residents in Homes With Serious Care Problems. GAO/T-HEHS-98-219. Washington, D.C.: July 28, 1998. California Nursing Homes: Care Problems Persist Despite Federal and State Oversight. GAO/HEHS-98-202. Washington, D.C.: July 27, 1998.
GAO was asked to assess the effectiveness of nursing home oversight by considering the effect of a unique Arkansas law that requires county coroners to investigate all nursing home deaths. Coroners refer cases of suspected neglect to the state survey agency and to law enforcement entities such as the state Medicaid Fraud Control Unit (MFCU). The Centers for Medicare & Medicaid Services (CMS) contracts with survey agencies in every state to periodically inspect nursing homes and investigate allegations of poor care or neglect. MFCUs are charged with investigating and prosecuting resident neglect. GAO examined (1) the results of Arkansas coroner investigations, (2) the state survey agency's experience in investigating coroner referrals, and (3) whether weaknesses in state and federal nursing home oversight identified in prior GAO reports were evident in the survey agency's investigation of coroner referrals. According to the Pulaski County coroner, he referred 86 cases of suspected resident neglect to the state survey agency for the period July 1999, when the Arkansas law took effect, through December 2003. Agency officials said that other state coroners referred four cases during this time period. Importantly, these 86 referrals constituted just 2.2 percent of all nursing home deaths the coroner investigated. However, the referrals included disturbing photos and descriptions of the decedents, suggesting serious, avoidable care problems; more than two-thirds of the 86 referrals listed pressure sores as the primary indicator of neglect. Some photos of decedents' pressure sores depicted skin conditions so deteriorated that bone or ligament was visible, as were signs of infection and dead tissue. The referrals involved 27 homes, over half of which had at least 3 referrals. Arkansas state survey agency officials told GAO that they received 36 (fewer than half) of the Pulaski County coroner's referrals. The 50 referrals the agency did not receive described decedents' conditions similar to those in the referrals it did receive. Of the 36 referrals for alleged neglect that it received, the survey agency substantiated 22 through complaint investigations, and the agency eventually closed the home with the largest number of referrals. However, the agency's investigations often understated serious care problems--both when neglect was substantiated and when it was not. For 11 of the 22 substantiated referrals, the state survey agency either cited no deficiency for the decedent or cited a deficiency at a level lower than actual harm for the predominant care problem identified by the coroner. In contrast, MFCU investigations of many of the 11 referrals found the homes negligent in caring for decedents, and the MFCU reached settlements with the owners of several homes. In half of the 14 referrals not substantiated, the MFCU or an independent expert in long-term care either found neglect or questioned the "not substantiated" finding. Moreover, they found gaps and contradictions in the medical records for some decedents, raising a question about the survey agency's conclusions that the same records indicated appropriate care had been provided. GAO's prior work on nursing home quality of care found that weaknesses in federal and state oversight nationwide contributed to serious, undetected care problems indicative of resident neglect. GAO's review of the Arkansas survey agency's investigations of coroner referrals confirmed that serious, systemic weaknesses remain.
Oversight weaknesses GAO previously identified nationwide and those it found in Arkansas included (1) complaint investigations that understated the seriousness of allegations and were not timely; (2) predictable timing of annual state surveys that could enable nursing homes so inclined to cover up deficiencies; (3) survey methodology weaknesses, coupled with surveyor reliance on misleading medical records, that resulted in missed care problems; and (4) a policy that did not always hold homes accountable for neglect associated with a resident's death.
Foodborne illnesses in the United States are widespread and costly. While the magnitude of the problem is uncertain, we reported in May 1996 that studies estimated that up to 81 million cases of foodborne illness and as many as 9,100 deaths occur each year. Recent estimates suggest that the number of illnesses may be even higher. Although estimates vary widely, the U.S. Department of Agriculture has estimated that the cost of these illnesses and deaths, measured in medical treatment and productivity losses, ranges from $7 billion to $37 billion a year.

A significant amount of the food we consume is imported, and the percentage is growing. For example, between 1980 and 1995, the imported share of all fresh fruit consumed by the American public rose from about 24 percent to about 33 percent, and the imported share of seafood rose from about 45 percent to about 55 percent. The Food and Drug Administration (FDA) estimates that the volume of imported fruits and vegetables will grow by 33 percent between now and 2002. The sheer volume of these imports, along with the difficulty of ensuring that they are safe, adds to the risk of foodborne illness and makes it essential that steps to ensure their safety are effective. Some imported foods pose especially significant risks of foodborne illness. They can introduce pathogens previously uncommon in the United States, such as new strains of Salmonella and the Cyclospora parasite. In 1996 and 1997, outbreaks of foodborne illness linked with the Cyclospora parasite in raspberries from Guatemala affected nearly 2,500 people in the United States and Canada, causing prolonged gastrointestinal distress and other painful symptoms. In addition, imported foods may contain pathogens, such as hepatitis A, that cannot be easily detected by examination or even laboratory analysis.

The Food Safety and Inspection Service (FSIS) has jurisdiction over meat, poultry, and some egg products, while FDA regulates all other foods. FSIS and FDA work closely with the U.S. Customs Service (Customs) and the Centers for Disease Control and Prevention (CDC). Customs refers imported foods to FSIS or FDA for their review before releasing a shipment into U.S. commerce. CDC monitors the incidence of foodborne illness, works with state and local health departments to investigate outbreaks of illness, and collaborates with FSIS, FDA, and others to conduct research on foodborne diseases.

As we have reported numerous times, the U.S. food safety system is characterized by a fragmented organizational structure, with numerous agencies implementing a hodgepodge of inconsistent regulations and laws. This lack of a uniform, risk-based approach has adversely affected our nation's ability to protect itself from a host of domestic food safety problems, and that same fragmented structure and inconsistent regulatory approach is being used to ensure the safety of imported foods as well.

To ensure the safety of meat and poultry imports, FSIS has a statutory mandate to require that each country wishing to export meat and poultry products to the United States demonstrate that it has an equivalent food safety system. As of January 1998, FSIS had certified 37 countries as eligible to export meat and poultry to the United States. FSIS has used this equivalency authority to shift most of the responsibility for food safety to the exporting country, which performs the primary inspection of products before they reach the United States.
This approach allows FSIS to leverage its resources by focusing its reviews on verifying the efficacy of exporting countries' systems rather than relying primarily on ineffective, resource-intensive port inspections to ensure the safety of imported foods. In contrast, FDA, although it is expected to ensure that imported fruits and vegetables and other foods meet U.S. standards, does not have a similar equivalency authority and therefore cannot require that countries exporting food products to the United States have safety systems in place that are equivalent to ours. As a result, FDA must rely primarily on selecting and testing import samples at ports of entry to ensure that foods are safe. Such an approach has been widely discredited as ineffective by the United Nations Food and Agriculture Organization, an FDA Advisory Committee, and our own analyses, because individual product samples tested at ports of entry may not represent the health risks of all shipments from that exporter. To exacerbate matters, FDA has been unable to keep pace with increasing imports, and its inspection coverage has fallen from an estimated 8 percent of import shipments in fiscal year 1992 to an estimated 1.7 percent in fiscal year 1997. Given the ineffectiveness of port-of-entry inspections, FDA cannot realistically ensure that unsafe foods are kept out of U.S. commerce. Even if FDA could inspect more shipments at ports of entry than it currently does, such an approach would still provide little assurance that imported foods are picked, processed, and packed under sanitary conditions, because inspectors have no assurance that the exporting country has an effective food safety system.

An equivalency requirement would allow FDA to share the burden of ensuring safety with the exporting country and to make better use of its limited resources. FDA agrees that it needs such authority but believes the authority should be discretionary, so that equivalency could be applied when FDA believes it is most appropriate, thus limiting disruptions in trade. In our April 1998 report, we recommended that equivalency be mandatory for all imported foods but phased in, so that it would not disrupt trade. Such mandatory authority would (1) impel FDA to take a proactive approach to preventing food safety problems, instead of requiring equivalency only after problems become apparent, and (2) enable FDA to leverage its staff resources by sharing responsibility for food safety with exporting countries.

FSIS and FDA use computer systems to review information on each import shipment and to help identify the shipments requiring inspector action. However, neither agency's system takes maximum advantage of available data to target the imported foods posing the greatest health risks, and each agency has opportunities to use its resources more effectively. FSIS relies primarily on the violation history of previous shipments from the exporting firm to target entries for inspections or laboratory tests, but the violation history may not always indicate the shipments most likely to pose health threats. For example, many violations, such as incorrect shipping labels, may not directly affect consumer safety. In 1996, about 86 percent of FSIS' refused shipments, excluding those refused entry for transportation damage, were refused for violations not directly related to health risks, that is, not for problems such as excessive residues, microbiological contamination, unsound condition, or defects caused by disease.
Nevertheless, these violations triggered a series of inspections on subsequent shipments of the same product from the same exporting firm until at least 10 consecutive shipments were found to be in compliance. When limited resources are targeted in this fashion, fewer resources are available for products posing greater health risks. FSIS could further improve its automated screening system if it developed information on patterns of violations, which would allow it to determine whether Salmonella contamination, for example, was a recurrent problem in a particular country or exported product and to increase its inspection frequencies for such shipments. FSIS possesses the raw data on those problems but has not designed its computer system to use these data to identify patterns of violations, such as firms or countries with repeated problems, that are directly related to food safety. According to FSIS, the agency will consider modifying its automated screening system to identify patterns of violations when it redesigns the system this year.

FDA's system for selecting imports for examination relies heavily on inspectors' judgment. To help its inspectors make informed judgments, FDA provides a number of tools, such as annual work plans, compliance programs, and databases containing historical or other pertinent information. However, these tools are often confusing, inconsistent, or not readily available to FDA inspectors and hence provide guidance of little practical value. Specifically, FDA's annual work plans set the number of activities, such as inspections and tests, that each FDA district is to conduct for the 10 specific food programs that cover imports. Each day, inspectors attempt to select shipments on the basis of the work plan's targets. According to FDA, its compliance programs, not the work plans, contain the specific guidance on inspection requirements. However, we found that FDA inspectors rely on the numerical inspection targets set forth in the annual work plan for guidance. These targets are sometimes inconsistent with the direction given in the compliance programs, and such inconsistent guidance distracts and confuses inspectors as they attempt to carry out their daily duties.

Moreover, FDA's computer system for screening imported food shipments is not programmed to help inspectors effectively use laboratory test results, violation histories, and other information on shipments to identify those shipments posing the greatest food safety risks. With respect to laboratory tests, FDA has not integrated its laboratory database with its automated import screening system; thus, inspectors do not have the results of prior laboratory tests available when making decisions on which imported products to inspect. Furthermore, FDA inspectors do not have ready access in the automated import screening system to some useful data on previous violations by foreign plants. For example, FDA has databases with information on prior violations by foreign plants or countries and on registrations of foreign firms producing certain canned foods, but the automated import screening system cannot query these databases, and the process for having inspectors do so can be cumbersome and time-consuming. To obtain these data, inspectors must close the automated import screening system and open the other databases.
We observed this process and found that it took 3 to 10 minutes each time an inspector wanted to switch from one database to another. Given that inspectors may have to process as many as 200 shipments per day, not all inspectors bother changing databases to look for this information; even at the low end of 3 minutes per lookup, checking every shipment would consume 10 hours, more than a full workday. Instead, inspectors told us, they often rely on their memory of the information in the databases or on notes. Because inspectors have these difficulties in obtaining needed data on health-related risks and are under time pressure, they decide which samples to select on the basis of incomplete information. As a result, inspectors may rely on individual biases. For example, one inspector told us he believed one country did not have sanitary facilities and therefore assumed that all food products imported from that country were contaminated with filth. This inspector routinely selected samples of food from that country for filth tests, although laboratory staff told us that such tests were a lower priority than tests for microbiological contamination and therefore were frequently not conducted. As a result, the resources used to select these samples were wasted. According to FDA officials, the agency received funds to enhance the screening system in fiscal year 1998 and will begin integrating the databases (the Laboratory Management System, the Import Alert Retrieval System, and the Low-Acid Canned Food database) with the automated import screening system this year.

Finally, the information identifying the contents of imported food shipments is, in most cases, entered directly into an automated import processing system by importers, some of whom have an incentive to misrepresent their goods to avoid inspectors' scrutiny. Importers who have demonstrated competency with the electronic system, known as paperless filers, are allowed to enter shipping information into the system without providing actual shipping documents to FDA. To ensure accuracy, FDA retrospectively verifies a sample of the importer-provided information and, according to its guidelines, may withdraw paperless filing privileges from filers with error rates of 10 percent or higher. However, FDA records show that no corrective actions to withdraw paperless filing privileges have been taken against even the most error-prone filers. According to a January 1998 FDA survey, over 300 paperless filers, nearly 15 percent of those audited, had error rates of 10 percent or greater, but paperless privileges were not withdrawn from any of them. As a result, importers aware of FDA's inaction could evade FDA's inspections by incorrectly describing the contents of a shipment. Such intentional circumvention was demonstrated in 1993, when an importer was found guilty on 138 counts, mostly related to misrepresenting the source of seafood in an attempt to avoid FDA's automatic detention.

In addition to the problems associated with FDA's system for selecting food shipments for inspection, several weaknesses in its controls over imported products enable some importers or their representatives to sell unsafe foods in the United States. Because of these weaknesses, some importers are able to (1) falsify laboratory test results on suspect foods to obtain FDA's approval to release them into commerce, (2) sell potentially unsafe imported foods before FDA can inspect them, and (3) sell imported foods even when FDA has found a violation and prohibited entry.
Compounding the absence of controls, violations are seldom punished effectively. In this environment, FDA has little assurance that contaminated products are kept off U.S. grocery shelves.

With respect to falsified laboratory test results, FDA's system for automatically detaining suspicious products pending testing to confirm their safety may be easily subverted, because FDA does not maintain control over the testing process: importers are allowed to choose the laboratory that selects and tests the samples. In fiscal year 1997, FDA automatically detained nearly 8,000 import shipments because it had identified violations in previous shipments of related products. Most of these shipments, according to FDA, were released after importers presented private laboratory test results showing that the shipments met U.S. standards. However, Customs and FDA officials are concerned about the reliability of the private laboratories chosen by importers and hence the reliability of their test results. According to Customs inspectors, some importers, to make their products appear to meet U.S. requirements, share shipments that have already been tested and proven to be in compliance, a practice referred to as "banking." FDA says it lacks the explicit authority to place restrictions on which laboratories importers can use to test products. Thus, FDA can neither control the selection of the samples tested nor insist on objective testing.

FDA also does not maintain control over products before releasing them into U.S. commerce, enabling importers to sell products before inspection or even after FDA has found a violation. Importers of FDA-regulated foods generally retain possession of import shipments until FDA releases them and must make the shipments available for FDA's inspection if requested. At the ports we visited, imported shipments under FDA's jurisdiction often entered U.S. commerce before being delivered to FDA for inspection or were not properly disposed of when refused entry. In Operation Bad Apple, which took place in San Francisco in 1997, Customs officials identified 23 weaknesses in controls over FDA-regulated foods. Importers' practices to circumvent FDA's controls included (1) ignoring FDA's requests that shipments in violation be redelivered to Customs for disposition and (2) substituting cargo so that FDA inspectors would not see contaminated foods. In this investigation, Customs found that about 40 percent of the imported foods determined to violate U.S. standards were never redelivered to Customs for destruction or export, as required, and presumably entered domestic commerce. Moreover, when shipments were redelivered to Customs for destruction or export, Customs officials said, other products had been substituted in about 50 percent of the shipments before redelivery. The results of this investigation are consistent with the findings in our 1992 report on pesticides, which found that 60 percent of the perishable foods and 38 percent of the nonperishable foods that FDA found to be adulterated with illegal pesticides were released into U.S. markets or not returned to Customs for destruction or reexport, as required. Customs and FDA officials recognize that this problem is occurring at other ports. In addition, there are few consequences for importers found to violate safety standards.
Lacking the authority to fine importers who distribute adulterated food shipments or who fail to retain shipments for inspection, FDA relies on a bond agreement between Customs and the importer for most shipments as a way to achieve compliance. The bond amount is based on the importer's declared value of the imported shipment, and damages (i.e., penalties) may be assessed against violators at up to 3 times the value of the bond. But such penalties are ineffective because Customs often does not collect full damages from importers that fail to comply with FDA's requirements. For example, in fiscal year 1997, Customs in Miami assessed and collected damages in only about 25 percent of the identified cases involving the improper distribution of food products. Customs and FDA attributed the low figure to (1) laxity in communicating information about refused shipments between the agencies, (2) unclear guidance for Customs officials' handling of the shipments, (3) a malfunction in the Customs computer system for storing case files, and (4) a halt in collections pending the resolution of a court case involving the collection of damages. Even when damages were assessed, Customs collected only about 2 percent of the original assessment. In one case, Customs collected damages of $100 from an importer for not returning a shipment with a declared value of $100,000, one-tenth of 1 percent of the shipment's value and far below the potential assessment of up to three times the bond. According to Customs officials, any reduction in damages must be in accordance with Customs guidelines, and both Customs and FDA must agree to reduce the damages.

In conclusion, Madam Chairman, we believe that it is vitally important that the nation's efforts to ensure the safety of imported foods be improved. As the portion of the U.S. food supply from imported sources continues to grow, it is clear that the safety of the U.S. food supply cannot be ensured unless food imports are safe. However, our system for keeping unsafe imported foods from entering the food supply has a number of weaknesses. These weaknesses can and should be addressed. We have made a number of recommendations to this end in our recent report, and we hope to develop additional recommendations as part of our ongoing work for you.

That concludes our prepared statement. We would be happy to respond to any questions you or members of the subcommittee may have.
Pursuant to a congressional request, GAO discussed: (1) findings from its recent report, in which it pointed out how limitations in the Food and Drug Administration's (FDA) authority and approach for regulating imported foods adversely affect its ability to ensure food safety; (2) how FDA's and the Food Safety and Inspection Service's (FSIS) procedures for selecting shipments to review result in the ineffective targeting of inspection resources; and (3) how weaknesses in FDA's and the Customs Service's controls allow unscrupulous importers to market unsafe products.

GAO noted that: (1) FDA lacks the legal authority to require that countries exporting foods to the United States have food safety systems equivalent to U.S. systems, an authority that FSIS has and uses to share the burden of ensuring safe foods with the exporting countries; (2) without such authority, FDA must rely primarily on its port-of-entry inspections, which covered less than 2 percent of shipments in 1997, to detect and bar unsafe foods; (3) such an approach has been widely discredited as an effective protective measure; (4) both FDA and FSIS could make better use of their inspection resources by using available health risk information to target for inspection the shipments that pose the highest food safety risk; (5) FDA could further improve the use of resources by clarifying its communications to inspectors about which shipments to select and by taking enforcement action when importers are found to inaccurately describe the contents of shipments; (6) with such improvements, FDA could better ensure that it is using its scarce resources to identify the foods posing greater risks; (7) FDA's procedures for ensuring that unsafe imported foods do not reach U.S. consumers are vulnerable to abuse by unscrupulous importers; (8) under current procedures, FDA generally allows importers to retain control over shipments until the agency grants their release; (9) if importers move shipments into domestic commerce without an FDA release, FDA has no effective means of compelling importers to return the shipments for inspection, destruction, or reexport; (10) when FDA requires an importer to provide evidence that a suspect shipment is safe, the agency allows the importer to select the laboratory that picks the samples to be tested and that conducts the tests; and (11) FDA's and Customs' principal deterrent for ensuring that importers comply with U.S. requirements is uneven and uncertain.
NASA plans to finish assembling the ISS in fiscal year 2010 and operate the station until 2016. The station is scheduled to support a 6-person crew capability as early as 2009. The shuttle was to be the primary means for ISS re-supply and crew rotation, and NASA's international partners were planning to augment the shuttle's capabilities with their cargo and crew spacecraft. Following the Columbia disaster in 2003, the President set a new "vision" for NASA that called for the shuttle's retirement in 2010 upon completion of ISS assembly. As part of the Vision, NASA is developing new crew and cargo vehicles, with the crew vehicle currently scheduled to be available in the 2015 timeframe. One of the vehicles, the Crew Exploration Vehicle, will carry and support only crews traveling to low earth orbit and beyond and will also be capable of ferrying astronauts to and from the ISS. However, since these systems are not scheduled to become operational until 2015, NASA plans to rely on international partners and commercial providers to make up the 5-year gap in ISS logistics and crew rotation resulting from the shuttle's retirement.

As we have begun our review of ISS assembly, several issues related to NASA's space shuttle manifest have come to our attention. First, the shuttle planning manifest dated January 2007 projects that NASA will launch 16 missions before retirement of the shuttle in 2010; one of those has already been launched. Of the 15 remaining missions, one will service the Hubble Space Telescope and two are designated as contingency missions. Assuming the contingency flights are flown, NASA will need to launch, on average, one shuttle every 2.7 months, an aggressive schedule when compared with recent launch timeframes. In the past, with three shuttles, NASA launched a shuttle every 3.7 months on average after the Challenger accident in 1986. Since the Columbia accident in 2003, NASA has averaged 10.8 months between launches. For the remainder of calendar year 2007, NASA has three launches planned, for a total of four missions for the year. Due to vehicle traffic constraints, the minimum required time between shuttle launches to the ISS is 35 calendar days, so while the manifest is aggressive, it is achievable.

Additionally, the current shuttle manifest leaves little room for unexpected delays caused by weather damage or launch debris, which have significantly affected the shuttle launch schedule in the past. For example, in 2007, hail damage to the external fuel tank caused an unexpected three-month delay in a shuttle launch. While there are limits to the planning NASA can do for such events, the tight schedule constraints leave little room for significant delays from such occurrences.

As evidence of the increasing pressure NASA is experiencing with regard to the shuttle manifest, the ISS program office is planning for certain cargo elements to be launched on the two final shuttle flights even though NASA, as an agency, still considers these flights contingency missions. NASA is also being forced to consider the possibility of canceling delivery of some portions of the ISS. Specifically, NASA determined that if the schedule slips, the Cupola observatory and the Node 3 connector, built for hardware, oxygen, and waste storage, may be moved to contingency flights. If that occurs and those flights do not launch, those elements may not be assembled on the ISS as originally planned.
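As a rough arithmetic check of the launch-cadence figure above, assume that roughly 40 months remain between mid-2007 and the planned retirement at the end of 2010 (the 40-month window is our illustrative assumption, not a NASA figure):

\[
\frac{40 \text{ months}}{15 \text{ missions}} \approx 2.7 \text{ months per mission}
\]

That interval is about a month shorter than the 3.7-month post-Challenger average and roughly one-fourth of the 10.8-month average interval since the Columbia accident.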
Finally, NASA officials explained that since only the shuttle is large enough to deliver certain large Orbital Replacement Units (ORUs) to the ISS, those units must be launched before the shuttle is retired. ORUs are replacements for units operating on the ISS that fail or reach the end of their life. The officials noted that NASA originally planned to use the shuttle to launch and retrieve certain large ORUs that are critical for ISS operations; after being brought back to Earth, the ORUs would be repaired, refurbished, and returned to service on the ISS. However, with the shuttle no longer available to transport those ORUs after 2010, NASA changed its strategy for providing them to the ISS from a refurbishment approach to a "launch and burn" approach. Under the new strategy, NASA would build enough ORUs to cover the ISS's planned mission life and use them up over time. Large ORUs that originally were to be launched and returned on the shuttle would have to be pre-positioned on the ISS before the shuttle retires.

There is still much to be worked out with NASA's change in strategy for positioning ORUs to cover the space station's planned mission life. For example, the program office is still assessing the implications of restarting production lines to produce additional spares. This involves examining whether the right equipment, materials, expertise, and data are still available, an endeavor that the ISS program office acknowledged would be challenging. We will continue to monitor changes to the shuttle manifest as they occur.

The space shuttle workforce currently consists of approximately 2,000 civil service and 15,000 contractor personnel. NASA must maintain a workforce with the critical skills necessary to manage the shuttle program through its completion. In response to GAO recommendations, NASA has undertaken several initiatives to address its potential workforce drain.

In 2005, we reported that NASA had made limited progress toward developing a detailed strategy for sustaining a critically skilled shuttle workforce to support space shuttle operations. We reported that significant delays in implementing such a strategy would likely lead to larger problems, such as funding problems and failure to meet NASA program schedules. Accordingly, we concluded that timely action to address workforce issues is critical, given their potential impact on NASA-wide goals such as closing the gap in human spaceflight. At the time we performed our work, several factors hampered the Space Shuttle Program's ability to develop a detailed long-term strategy for sustaining the critically skilled workforce necessary to support safe space shuttle operations through retirement. For example, the program's focus was on returning the shuttle to flight, and other efforts, such as determining workforce requirements, were delayed.

In our report, we recommended that NASA begin identifying the Space Shuttle Program's future workforce needs based upon various future scenarios; scenario planning could better enable NASA to develop strategies for meeting future needs. NASA concurred with our recommendation. The agency acknowledged that shuttle workforce management and critical skills retention will be a major challenge as it progresses toward retirement of the space shuttle, and it has acted to respond to our recommendation.
For example, since we made our recommendation, NASA developed an agencywide strategic human capital plan and workforce analysis tools to assist it in identifying critical skills needs. NASA also developed a human capital plan specifically for sustaining the shuttle workforce through retirement and then transitioning that workforce. According to agency officials, NASA is currently mapping the available skills of the space shuttle workforce against the skills it will need for future work so that it can better plan and implement workforce reassignments. NASA's senior leaders recognize the need for an effective workforce strategy to sustain the shuttle workforce through the shuttle's retirement, which coincides with the completion of the ISS. Clear, strong executive leadership will be needed to ensure that the risks associated with the transition of the shuttle workforce are minimized.

NASA has several options for filling the gap between the shuttle, which will retire in 2010, and new NASA-developed vehicles, which are not expected to come on line until 2015. The first relies on new vehicles developed within the U.S. commercial space sector. The second relies on vehicles developed by international partners, both new and legacy systems. There are considerable challenges with all the options NASA is examining.

NASA is working with the commercial space sector to develop and produce transport vehicles that can take equipment and, ultimately, crew to and from the space station during the gap between the space shuttle and the crew launch vehicle. Rather than buy these vehicles outright, NASA plans to help fund their development and purchase transportation services, or perhaps even the vehicles themselves, when they are needed. This program is known as Commercial Orbital Transportation Services (COTS). Currently, NASA has seven COTS agreements, all in the initial phases of raising private funds for development. NASA funding has been provided to two companies, Rocketplane Kistler (RpK) and Space Exploration Technologies (SpaceX). NASA has signed five more Space Act Agreements, which facilitate the sharing of technological information, but these agreements are not funded by NASA.

The COTS program has two phases: the first entails technical development and demonstration, and the second may include the competitive procurement of orbital transportation services for ISS logistical support. NASA officials noted that both RpK and SpaceX met their first milestone to demonstrate financial progress by obtaining private funding. However, RpK missed its second milestone on May 31, 2007, and had to renegotiate its Space Act Agreement milestones with NASA. The International Space Station Independent Safety Task Force (IISTF) reported in February 2007 that the design, development, and certification of the new COTS capability for ISS re-supply was just beginning.
IISTF stated that, "if similar to other new program development activities, it most likely will take much longer than expected and will cost more than anticipated." Our work has generally found that space and other complex system development efforts, including NASA-sponsored efforts, often encounter schedule delays and technical problems when they seek significant advances in technology, move forward amid changing requirements or other unknowns, or are managed without adequate oversight. In our opinion, risks may be high in these partnerships, given that the suppliers do not have long-standing relationships with NASA or other government agencies and that developing the COTS vehicles represents a totally new endeavor for most of these companies. As such, it will be exceedingly important for NASA to establish sound program management and oversight controls over these endeavors, establish clear and consistent guidance, limit requirements changes, and ensure that it has adequate visibility into the progress being made by the COTS suppliers. Our review will examine the extent to which these measures are being taken.

As you know, GAO has identified contract management as a high-risk area for NASA. Actions designed to enhance program management and oversight are being implemented, but it may take years to complete them. This may make it even more difficult for NASA to successfully manage and oversee its relationship with the COTS suppliers. If NASA relies on these development efforts without adequate oversight, the programs could fall short of cost and schedule estimates, deliver downgraded performance, and ultimately affect overall sustainment of the ISS.

NASA has suggested that some supply activities during the gap can be conducted by vehicles under development or currently in operation by international partners, specifically Europe, Japan, and Russia, but these vehicles have constraints. Our ongoing review will assess these constraints in greater detail. To begin with, the new vehicles being developed by the European and Japanese space agencies are very complex. Currently, the first test flight of the European vehicle is likely to occur in January 2008, and the Japanese vehicle will not have its first operational flight until 2009. According to NASA officials, both the European and Japanese vehicle developments have experienced technical hurdles and budgetary constraints, but both partners are committed to fulfilling their roles in the ISS program. NASA officials are confident that the European vehicle will be available for ISS operations before retirement of the shuttle, but they are less confident about the Japanese vehicle being ready by that time. NASA's reliance on these vehicles to augment re-supply activities after 2010 assumes that further delays in their development will not occur. NASA expects that these vehicles will be developed in parallel with commercial developments; the agency's preference is to use commercially developed vehicles, rather than the vehicles developed by the international partners, to cover the capability gap after retirement of the shuttle fleet. NASA also plans to continue working with Russia to provide crew and cargo support to the ISS, but this has been facilitated through an exemption to the Iran, North Korea and Syria Nonproliferation Act.
Russian vehicles that were already operational were used to rotate crews and supply the ISS during the period after the Columbia accident, and a Russian Soyuz vehicle remains docked to the ISS continuously. The Iran, North Korea and Syria Nonproliferation Act exemption expires at the end of 2011, at which time any exchanges will be subject to the restrictions of the act. If commercial development does not produce a usable vehicle by that date, the only vehicle that can support crew transportation is the Russian Soyuz spacecraft. According to NASA officials, the agency plans to request a waiver to extend the exemption beyond 2011 if this situation occurs.

Additionally, there are challenges related to sharing knowledge with international partners because of restrictions under the International Traffic in Arms Regulations (ITAR). This was highlighted by the International Space Station Independent Safety Task Force, and NASA has been working to address the concerns laid out in that study. Over the years, GAO has identified weaknesses in the efficiency and effectiveness of government programs designed to protect critical technologies while advancing U.S. interests. While each program has its own set of challenges, we found that these weaknesses are largely attributable to poor coordination within complex interagency processes, inefficiencies in program operations, and a lack of systematic evaluations for assessing program effectiveness and identifying corrective actions. However, in reviewing the Joint Strike Fighter, another complex international system development effort, we also identified actions that could be taken early in programs to prevent delays and other problems related to ITAR. Our review going forward will assess the degree to which challenges in this area remain.

Mr. Chairman, this concludes my statement. I would be pleased to answer any questions that you or the other members may have at this time. For further questions about this statement, please contact Cristina T. Chaplain at (202) 512-4841. Individuals making key contributions to this statement include James L. Morrison, Brendan S. Culley, Masha P. Pastuhov-Purdie, Keo Vongvanith, and Alyssa B. Weir.
This testimony discusses the challenges faced by the National Aeronautics and Space Administration (NASA) on the International Space Station (ISS) and the Space Shuttle. NASA is in the midst of one of the most challenging periods in its history. As part of its Vision for Space Exploration, NASA is simultaneously developing a range of new technologies and highly complex systems to support future exploration efforts, completing assembly of the space station, and retiring the space shuttle. This is NASA's biggest transition effort since landing humans on the moon more than 3 decades ago and then initiating the Space Shuttle Program a few years later. Taken together, these efforts create significant challenges in terms of managing investments, launch and other facilities, workforce, international partners, and suppliers. Clearly, any delays or problems in completing and sustaining the space station itself may well have reverberating effects on NASA's ability to ramp up efforts to develop technologies needed for future exploration or to support other important missions. GAO has undertaken a body of work related to NASA's transition efforts that includes NASA's industrial supplier base, its workforce challenges, the development of new crew and cargo spacecraft, and NASA's assembly and sustainment activities related to the ISS. This statement focuses on the preliminary results of ongoing efforts, as well as other GAO work completed to date. Specifically, it will address the following challenges: (1) executing plans to use the shuttle to complete the ISS; (2) maintaining the shuttle workforce through the shuttle's retirement; and (3) filling the gap between the shuttle and new NASA-developed vehicles to service the ISS. NASA's ability to overcome these challenges will be critical to ensuring the availability of the International Space Station as a viable research entity into the future. While these results and findings are preliminary, many have been echoed in other studies and identified by NASA itself. Our work is being conducted in accordance with generally accepted government auditing standards.
Oil is the largest single energy source used in the United States and remains perhaps the most visible energy source to consumers. Oil, and the gasoline refined from it, provided the critical energy for the automobile that mobilized America, and oil remains at the center of the transportation sector and of our national energy policy debate. In 2003, oil accounted for about 40 percent of total U.S. energy consumption, and the United States consumed about 7.3 billion barrels of crude oil—about 20 million barrels per day. Most oil is used in the transportation sector as gasoline, diesel, and jet fuel; oil-based products account for over 98 percent of the U.S. transportation sector's fuel consumption. Oil is also used as a raw material in the manufacturing and industrial sectors; for heating in the residential and commercial sectors; and, in small amounts, for generating electric power. Although the United States accounts for about 5 percent of the world's population, we consume about 25 percent of the world's oil. And although the United States and its industrialized counterparts currently account for the bulk of world oil demand, demand is growing rapidly in developing nations, especially those in Asia, such as China and India.

The United States relies on imported oil for more than half of its supply and appears likely to increase that reliance in the future. Historically, the United States produced most of the oil it consumed. However, U.S. oil production began to decline in 1970 and has dropped by about 40 percent since then. Since 1970, imports of crude oil and other products have increased 255 percent, and imports now comprise nearly 56 percent of the U.S. oil supply. Part of the reason for the rising imports is cost; it has been less costly to purchase oil produced in other countries than to produce it in the United States. Rising U.S. imports have increasingly been supplied by countries belonging to the Organization of Petroleum Exporting Countries (OPEC), which collectively provided about 42 percent of our total imports during 2003. Because about 20 percent of our imports came from the Persian Gulf region, including 14 percent from Saudi Arabia, our reliance on these imports has made the United States subject to the political instability witnessed in the Middle East in recent years. We also import a large amount of oil from our neighbors in North America; about 30 percent of our imported oil came from Canada and Mexico. Going forward, the United States will increasingly rely on imported oil: although the United States is currently the world's third largest oil producer, U.S. proven oil reserves account for only about 2 percent of total world reserves, while OPEC holds about 68 percent.

The prices of crude oil and refined petroleum products, such as gasoline and home heating oil, have been volatile over the years. Since the 1970s, the crude oil market has, at times, been heavily influenced by the OPEC cartel. Because the member countries control a large share of world production and total reserves, they have been able to influence crude oil prices by limiting supply through country-by-country production quotas. These quotas have, at times, served to maintain a tight balance between world supply and world demand.
However, because of the relative political instability of the Middle East and some other OPEC countries (such as Nigeria and Venezuela), occasional oil supply disruptions and price shocks have been a fact of life for about the past 30 years and may remain an issue for the foreseeable future. Although crude oil prices play a large role in determining the prices of gasoline and other refined petroleum products, other factors also influence the volatility of gasoline prices, including limited refinery capacity, low inventory levels relative to demand, supply disruptions, and regulatory factors, such as the various gasoline formulations used to meet federal and state environmental laws. Federal and state taxes on gasoline and other products raise the level of prices, but these taxes do not fluctuate often and so do not contribute to price volatility.

Demand has pressed the limits of the oil industry's production and delivery infrastructure in recent years. While U.S. crude oil production has fallen, rapidly rising imports have required more ocean tankers of crude oil to be off-loaded each year, forcing expansions of ocean crude oil terminals and coastal refineries. Because some refineries have closed, and no new ones have been built since 1976, fewer refineries are available to convert crude oil into gasoline and other products. Although increases in overall output have been achieved by expanding capacity at the remaining refineries and operating them at very high production levels, the nation's domestic refining capacity has lagged overall demand growth for petroleum products. Further, the network of pipelines that delivers refined petroleum products also operates at high levels of capacity, sometimes limiting the amount of fuel that can be shipped. Finally, the capacity of the gasoline terminals that distribute fuel to local gas stations is also limited in some parts of the country.

Over the past 30 years, the federal government has undertaken many efforts designed to influence petroleum markets and the demand for petroleum-based fuels. For example, in the mid-1970s, the federal government developed the Strategic Petroleum Reserve, part of an international reserve effort designed to mitigate the economic impacts on world economies of any large, sustained disruption to the oil supply. In addition, the federal government has supported a number of research and development and regulatory efforts designed to reduce demand for petroleum fuels in transportation. For example, the federal government supported the Partnership for a New Generation of Vehicles to aid U.S. automobile manufacturers in developing gas-electric hybrid vehicles, and it has encouraged the development and deployment of technologies focused on alternatives to petroleum-based fuels, such as the recent FreedomCAR initiative, a program to help develop fuel-cell technologies for vehicles.

GAO has issued numerous reports on aspects of the petroleum sector, including gasoline markets and government efforts to reduce the consumption of gasoline in vehicles. We also have reported on government efforts to improve gasoline vehicle efficiency through the use of gasoline-electric hybrid technologies and to shift vehicle fuel use to alternatives such as compressed natural gas or hydrogen-powered fuel cells.
GAO has also noted that low gasoline prices do not reflect external costs associated with gasoline use, such as the health and environmental impacts of air pollution or the economic cost that may result from the nation's vulnerability to oil price shocks. Consequently, low gasoline prices work to discourage energy efficiency and the use of alternative fuels. Most recently, we reported on the effects of mergers and market concentration in the U.S. petroleum industry, noting that the mergers and increased market concentration that occurred in the mid-to-late 1990s contributed to higher wholesale gasoline prices—averaging about 1 to 2 cents per gallon. Other factors, such as changes in gasoline formulations and supply disruptions, may also have contributed to higher gasoline prices during this period. Later this year, GAO will release a primer on how gasoline is made and distributed, what factors influence the price of gasoline, and why gasoline prices change, among other things. In forthcoming work requested by the Congress, GAO will report on the presence of multiple fuel formulation requirements in some parts of the country and how the expansion of these fuels has affected prices.

What are the potential implications for the United States of increased world reliance on oil supplies from politically unstable sources, such as OPEC countries?
To what extent can the United States increase refining capacity and other delivery infrastructure to meet growing demand for petroleum products?
What are the implications if there are further consolidations in the U.S. petroleum industry?
Are there ways to better reflect the full societal cost of using gasoline in gasoline prices, and what are the trade-offs of doing so?

Coal has been a key energy resource in the United States for over 100 years. Over this time, the use of coal has provided low-cost electricity but has brought with it environmental consequences, such as air pollution. Choices regarding the use of coal revolve around balancing these consequences, in light of new technologies to reduce them, with the energy benefits of using this plentiful domestic resource. In 2003, coal accounted for about 23 percent of total U.S. energy consumption. Nearly all of the coal consumed in the United States, 92 percent, was used in the production of electricity, with almost all of the remaining 8 percent used directly by industries such as steel manufacturing. Coal-fired power plants provided about half of total electricity generation in the United States in 2003, with larger shares in some parts of the country, such as the mountainous West and the Midwest. Coal is expected to remain a vital element in the country's energy supply; EIA's most recent forecast indicates that coal will continue to provide about 20 percent of the country's energy needs in 2025.

The United States has substantial domestic coal resources, leading some to refer to the United States as "the Saudi Arabia of coal." Nearly all of the coal used in the country is produced domestically, and based on 2003 EIA data, recoverable U.S. coal reserves are estimated to be sufficient to last over 250 years at current usage rates. Coal is generally extracted from either surface or underground mines. Underground coal also contains a combustible gas, called coal bed methane, that can be removed using wells and burned to produce usable energy, much like conventional natural gas.
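The 250-year estimate above is consistent with simple arithmetic on reserve and consumption figures of that era; the round numbers below (roughly 270 billion short tons of recoverable reserves and about 1.1 billion short tons consumed per year) are our own illustrative assumptions, not figures from this report:

\[
\frac{270 \text{ billion tons}}{1.1 \text{ billion tons per year}} \approx 245 \text{ years}
\]

This is on the order of the commonly cited 250 years; modest changes in either assumption move the estimate by decades, which is why such figures are best read as rough orders of magnitude.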
Coal reserves are located across the country, with large reserves in the West, the Midwest, and the Appalachian Mountains, but consumption of coal from the West has increased sharply in recent years. A large portion of these reserves is located on federal lands and is subject to direct federal controls, such as the payment of royalties, limits on the amount of federal land an individual company may mine, and requirements that surface land be restored to conditions similar to natural conditions when mining ends. Partly owing to the abundance of coal and technological improvements in the mining industry, coal prices have been declining in real terms since the mid-1970s.

The production and use of coal have a variety of environmental consequences, including those related to mining and those related to the pollution emitted when coal is burned. Surface mining has the most significant impacts on land resources, in some cases substantially altering the terrain. Both surface and underground mines can significantly affect water resources by introducing pollution or silt into groundwater or waterways. Regarding air quality, the combustion of coal in power plants emits pollutants and contributors to pollutants such as nitrogen oxides (NOx), sulfur oxides (SOx), particulate matter (PM), and toxic chemicals, such as mercury. Although some older power plants emit high levels of these substances, significant advancements have been made in the development of new power plants, which utilize technologies that substantially reduce emissions. In addition to these pollutants, coal plants release a substantial amount of carbon dioxide, a gas that is common in nature but has been linked with the "greenhouse effect," a greater-than-normal rise in the planet's temperature. Although some countries have agreed to attempt to reduce emissions of carbon dioxide and other "greenhouse" gases, the United States does not currently regulate the emissions of such gases. However, DOE has supported research focused on developing a zero-emission coal-fired power plant that would not emit any pollutants or carbon dioxide into the air. In 2005, according to an industry policy group, 100 or more power plants featuring advanced technologies that substantially reduce emissions of pollutants were being considered for development in the United States.

We have issued reports and testified on two primary coal-related issues: technologies supported under DOE's Clean Coal Technology program and the environmental consequences of using coal in power plants. Over the past several years, we have reported on the Clean Coal Technology program, noting that while DOE has reported successes in deploying new technologies, the program has had management problems, and there may be important lessons that should be considered in future similar efforts, such as the value of cost-sharing agreements and federal cost-sharing limits. We have also reported (1) that coal-fired power plants that have not been required to install modern pollution-reducing equipment emit higher levels of pollutants such as NOx and SOx than plants where this equipment is present and (2) that increased electricity generation to meet expected growth in demand may increase emissions of certain pollutants. In forthcoming work requested by the Congress, GAO will report on the effectiveness and cost of technologies to reduce emissions of mercury, a toxic element present in coal that is released when coal is burned.
How can the federal government balance the use of this abundant domestic energy source with its regulated and unregulated environmental consequences?
Where will additional coal be mined, where will new power plants be located, and are additional infrastructure improvements needed?
What is the potential role for coal bed methane, what are the trade-offs of extracting it, and what, if anything, should the federal government do to influence its development and production?
What changes in controls, if any, should the federal government make to how coal can be mined on federal land and elsewhere?
What role, if any, should the federal government play in providing incentives for using coal in ways that are safer for the environment?

Natural gas, recently the fuel of choice, is one of the most versatile and widely used fuels: significant amounts are used as a raw material in the fertilizer, chemical, and other industries; for space heating in the industrial, commercial, and residential sectors; and for electricity generation. Until recently, prices have been low, and the use of natural gas for space heating and electricity generation has expanded rapidly. Meeting the projected future growth in natural gas demand by delivering additional supply poses challenges.

Natural gas plays a vital role in meeting the country's energy demand, accounting for about 23 percent of the total energy consumed in the United States. Use of natural gas has been growing rapidly since the mid-1980s, with consumption increasing by about 35 percent from 1986 through 2003. Natural gas demand has been greatest in the industrial sector, which accounted for about 37 percent of total demand in 2003, followed by the residential sector and electric power, each accounting for about 22 percent, and the commercial sector, at about 14 percent. The rest, about 3 percent, is used in the transportation sector, mostly as fuel for pipelines. A significant share of the increased demand in recent years has come from increased use of natural gas to generate electricity. This use has increased by 79 percent since the 1987 repeal of the Powerplant and Industrial Fuel Use Act, which had restricted the construction of power plants using oil or natural gas as a primary fuel; natural gas is now the primary fuel in new power plants. EIA estimates that total natural gas demand could increase 50 percent in the next 25 years.

Although natural gas prices remained low for many years, in recent years they have increased dramatically. From 1995 to 2004, average wellhead prices for natural gas more than tripled, rising from $1.55 per thousand cubic feet to $5.49 per thousand cubic feet. These higher prices may have contributed to industrial companies reducing or ceasing U.S. operations; EIA data indicate that demand has fallen rapidly in the industrial sector, where consumption decreased by 16 percent from 1997 through 2003.

Historically, almost all the natural gas used in the United States has been produced here, but a small and growing share is imported. Most natural gas production involves extracting gas from wells drilled into underground gas reservoirs, although some natural gas is produced as a by-product of oil production. In 2003, domestic sources provided about 85 percent of total consumption. Historically, most of the country's natural gas came from Texas, Oklahoma, and Louisiana.
However, the Rocky Mountain region, Alaska, and areas beneath the deeper waters of the Gulf of Mexico are becoming increasingly important in supplying natural gas. Overall, from 1994 through 2003, domestic annual production held steady at about 19 trillion cubic feet. In 2003, the United States imported about 15 percent of the total natural gas consumed, with nearly all of it coming from Canada via pipeline. However, a small share is shipped on special ocean tankers as liquefied natural gas (LNG) from countries such as Trinidad and Tobago and Nigeria. Looking ahead, EIA estimates that U.S. consumption could increase to about 31 trillion cubic feet by 2025, expanding the gap relative to U.S. production and requiring increasing imports to meet U.S. needs.

The United States still has substantial undeveloped natural gas resources, but some of these resources are located under federal lands, and access to some of them is restricted. For example, about 40 percent of the natural gas resources on federal land in the Rocky Mountain region are not available for development. Additional natural gas reserves are located in federally controlled offshore areas or other areas and are not available for development at this time. Extensive drilling for natural gas can substantially modify the surrounding landscape and in some cases can adversely affect wildlife and its habitat, degrade air and water quality, and decrease the availability of groundwater to ranches and houses that may depend upon it. The federal government is required to consider these environmental consequences when determining if, and how, natural gas will be extracted from federal lands. In response, the natural gas industry has used, and continues to use, more advanced drilling methods and processes to mitigate future adverse impacts.

Meeting the sharp increases forecast for natural gas demand could also require substantial increases in infrastructure, such as new pipelines and LNG terminals. In particular, increasing natural gas supplies may require greater pipeline capacity and new pipelines. For example, over the past 20 years the federal government has considered a variety of issues with financing and building a new pipeline across federal and state lands to deliver natural gas from Alaska. The federal government is involved in the regulation and permitting of natural gas pipelines, particularly those that must traverse federal lands. To meet the need for sharply higher imports of natural gas, some experts believe that the United States may need to build more LNG terminals. To date, however, such facilities have not been built, due to economic, safety, and security concerns. Consequently, it is not clear whether the United States can effectively compete with other countries for these supplies.

Over the last several years, we have issued a number of reports on natural gas, including reports on the natural gas markets and their oversight, various approaches for compensating the federal government when natural gas is removed from federal land, and the impacts of higher natural gas prices on certain industries. In 2002 and 2003, for example, we issued reports analyzing natural gas markets and their oversight.
We noted that (1) prices generally increase because limited supplies have not been able to react quickly enough to changes in demand; (2) the federal government (e.g., the Federal Energy Regulatory Commission and EIA) faces significant challenges in overseeing natural gas markets and ensuring that prices are determined in a competitive and informed marketplace, minimizing unnecessary price volatility; and (3) buyers of natural gas have options to reduce their exposure to volatile prices through the use of long-term contracts and financial hedging instruments. In forthcoming work requested by the Congress, GAO will report on federal efforts to understand and manage risks associated with potential terrorist attacks on LNG shipments and other tankers.

Should the federal government encourage further development of domestic natural gas on federal lands, and can it ensure that environmental impacts are adequately mitigated? What are the infrastructure needs of the natural gas industry, including natural gas pipelines generally and in Alaska in particular, and what role, if any, should the government play in facilitating the development of this infrastructure? What are the implications for consumers (residential, commercial, industrial, and electric power) of the increasing reliance on natural gas to generate electricity? What are the economic and other barriers and/or trade-offs to developing an infrastructure to support increases in LNG shipments, and what role, if any, should the federal government play? To what extent is the federal government positioned to ensure that natural gas prices are determined competitively?

Nuclear energy was once heralded as the single answer to all of the country’s energy woes, with predictions that electricity would soon be “too cheap to meter.” While these enormous expectations have not been met, nuclear energy has become an important part of the country’s current energy picture and may remain that way for years to come. Whether we can continue to rely on, or expand our use of, nuclear energy in the future, at existing plants or at new plants based on new designs, hinges on solving the long-term waste storage problem as well as resolving concerns over safety and security.

Nuclear energy currently accounts for about 8 percent of U.S. national energy consumption. Nearly all nuclear energy is used to generate electricity, and nuclear plants are important contributors to total U.S. electricity production, providing about 20 percent in 2003. The first commercial nuclear power plant came on line in 1957, and the country witnessed a flurry of construction from the late 1960s through the 1980s. Many nuclear plants operating today were initially licensed for 40 years, and many are now approaching the end of their licenses. Since an accident at the Three Mile Island nuclear plant in 1979 raised concerns regarding the safety of nuclear plants, no new plants have been ordered in the United States, and none has been brought on line since 1996. In addition, many of the plants that were completed experienced multibillion-dollar cost overruns. Over the past several years, a number of nuclear generating units have been retired, but because the remaining 104 units have increased their productivity, overall output actually increased by about 13 percent from 1998 through 2003. This increase in productivity has been impressive; the average annual capacity factor has increased from 71 percent in 1997 to 90 percent in 2004.
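The capacity factor cited here is simply actual generation divided by the generation a fleet could produce if it ran continuously at full rated power. As a rough illustration of how a higher capacity factor can raise total output even as units retire, the following Python sketch uses hypothetical unit counts and capacities; all inputs are illustrative assumptions, not figures from this testimony, so the resulting percentage will not match the 13 percent reported above.

```python
# Illustrative sketch: a rising capacity factor can raise fleet output
# even as the number of units falls. All inputs are hypothetical.
HOURS_PER_YEAR = 8760

def annual_output_twh(units, avg_capacity_mw, capacity_factor):
    """Annual generation in terawatt-hours for a fleet of identical units."""
    mwh = units * avg_capacity_mw * HOURS_PER_YEAR * capacity_factor
    return mwh / 1_000_000  # MWh -> TWh

# Hypothetical fleet: a few units retire, but the capacity factor rises
# from 71 percent to 90 percent, the range the testimony reports.
before = annual_output_twh(units=110, avg_capacity_mw=950, capacity_factor=0.71)
after = annual_output_twh(units=104, avg_capacity_mw=950, capacity_factor=0.90)

print(f"Output before: {before:.0f} TWh, after: {after:.0f} TWh "
      f"({(after / before - 1) * 100:+.0f} percent)")
```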
These increases in productivity and other improvements have led some plant operators to seek to operate some plants at somewhat higher capacity. There appears to be renewed interest in extending the licenses of some existing plants and even building new plants. Interest in nuclear power plants has increased, in part, because they do not emit regulated air pollutants, such as nitrogen oxides, sulfur oxides, and particulate matter, that can be costly to control, or carbon dioxide, a greenhouse gas that many in the electricity industry believe might be regulated in the future. Given the improved performance, limited air emissions, and production cost advantages of nuclear power plants, some companies operating existing nuclear plants have already had them relicensed through the Nuclear Regulatory Commission (NRC) to operate for up to another 20 years, and others have started similar efforts. In addition, there have been trade industry reports that a number of utilities and other energy companies are actively considering submitting applications to build new plants.

Over the past 20 years, plants have continued to be built overseas. New designs have emerged, and foreign manufacturers have gained significant experience building them. Nuclear energy plays a large role in supplying energy in France, Germany, Canada, Japan, and other developed nations. Although nuclear plants remain very costly to build compared to some other plant types, they have lower fuel and other operating costs and can produce electricity at a lower cost than new plants that use fuels such as coal or natural gas—the primary energy source used in new U.S. power plants. In this country, NRC has approved new reactor designs, and NRC and the Department of Energy are working to reduce the approval and construction lead times for potential new plants.

Although the United States has a large domestic supply of uranium, the nation increasingly relies on international markets for the fuel used at U.S. reactors. Historically, that fuel has been produced domestically. However, several factors have combined to reduce the competitiveness and capacity of domestic reactor fuel production, including falling prices for reactor fuel on international markets and factors surrounding the 1998 privatization of the United States Enrichment Corporation (USEC). In response to the changes in the market, USEC closed the Portsmouth, Ohio, fuel plant, leaving only the facility at Paducah, Kentucky, as the domestic source. Both France and Japan have advanced facilities that produce nuclear plant fuel, and these provide a large and growing share of international supplies, including those used in the United States.

Although nuclear plants do not emit air pollutants, they produce radioactive waste, including highly radioactive waste that must be stored in isolation for thousands of years. The federal government committed to develop a permanent storage facility that would receive this waste by 1998, but delays have pushed the potential opening of the facility to the 2012 to 2015 time frame. Efforts to develop the facility have focused on storing the waste deep under Yucca Mountain in the desert north of Las Vegas, Nevada. In 2002, NRC reported that about 45,000 tons of spent fuel from nuclear plants was stored in the United States. Because the permanent repository has not been completed, the highly radioactive waste remains stored at power plants and other facilities and has been the subject of several lawsuits.
Nuclear power plants have been operated safely, largely without incident. They contain radioactive materials that, if released, could pose catastrophic risks to human health over an expansive area, but they are designed and operated to avoid such an event and incorporate measures to protect the plant from attack. The Nuclear Regulatory Commission, among other things, oversees these plants, conducting periodic inspections of plant equipment and evaluating security. However, since the terrorist attacks of September 11, 2001, nuclear plants have emerged as a key security concern, and attention on these plants has increased. Industry expects that new plant designs will further reduce safety and security risks, incorporating features that, among other things, automatically cool the nuclear reactor.

We have issued a number of reports dealing with aspects of nuclear energy covering three key areas: NRC’s oversight of safety issues at the existing nuclear plants; the development of a permanent storage facility for the highly radioactive waste produced by nuclear plants; and the potential vulnerability of these plants in light of the terrorist attacks of September 11. In May 2004, we issued a report on the discovery that corrosion had eaten a pineapple-sized hole in the nuclear reactor vessel head at the Davis-Besse power plant in Ohio; the corrosion did not result in a radioactive release but highlighted problems with NRC’s inspections and oversight. We have issued a series of reports, spanning more than 20 years, that focus on various aspects of the development of a permanent nuclear waste storage facility. In 2002, we reported (1) that it would be premature for DOE to recommend the facility at Yucca Mountain to the President as a suitable repository for nuclear waste; (2) that DOE was unlikely to achieve its goal of opening a permanent storage repository at Yucca Mountain by 2010; and (3) that DOE did not have a reliable estimate of when, and at what cost, such a repository could be opened. We have also issued reports concerning the vulnerability of nuclear power plants to terrorist attacks. In September 2004, we testified that NRC was generally approving plants’ new security plans on the basis of limited details in the plans and without visiting the plants. In forthcoming work requested by the Congress, GAO will undertake a comprehensive review of NRC’s reactor oversight process and how NRC ensures that plants operate safely. GAO will continue to examine homeland security issues related to protecting commercial nuclear power plants from terrorist attacks.

What role should nuclear energy continue to play in providing the nation’s energy needs in view of the aging of existing plants? Should new nuclear power plants be built in the United States, and can their design and construction make sense from a business standpoint while providing the safety and security assurances important to surrounding communities? How can existing and future nuclear waste generated by power plants be managed in an appropriate and timely manner? Are changes needed in how the industry and NRC ensure that plants are operated safely and securely, and is enough being done to protect nuclear plants from terrorist attacks?

Electricity has emerged as one of the essential elements in modern life.
Today, electricity lights our homes, enables our businesses to be more productive through the use of computers, and creates the basis for our modern quality of life, providing power for everything from our morning coffee to our nightly television news. Unlike the other types of energy that we have discussed—so-called primary sources of energy—electricity is generated through the use of the other energy sources (such as when natural gas is burned in power plants to generate electricity). Encouraged by the federal government, the electricity industry is in the midst of historic changes. Assessing that transition and determining whether the federal government can improve how electricity markets function remains a focus for federal policy.

Electricity use has grown steadily in recent years. From 1980 through 2003, the quantity of electricity sold increased by 75 percent, with the largest increases coming in the residential and commercial sectors. Electricity is used in these sectors for space heating and cooling, lighting, and operating small appliances, such as computers and refrigerators. Industrial consumption declined slightly over this period, reflecting the contraction of manufacturing, including some large industrial users of electricity such as the aluminum and steel industries. In 2003, over 70 percent of electricity was generated using fossil fuels, with over 50 percent coming from coal-fired power plants, about 16 percent from natural gas, and small amounts from petroleum and other fossil fuels. In recent years, new power plants have predominantly relied on natural gas. Nuclear energy provides about 20 percent of electricity generation, hydroelectric energy provides about 7 percent, and a variety of renewable resources, such as wind turbines, provide the remainder.

The federal government has a direct role in supplying electricity through the federally controlled Power Marketing Administrations, which market electricity produced by federally owned dams and other power plants and which own an extensive transmission network to deliver that electricity. These entities initially aided in the federal mission to bring electricity to rural areas; however, most now serve major metropolitan areas, in addition to some rural customers.

Historically, electricity has been produced and delivered by local monopoly utilities within a specific area, but this has been changing. The electricity sector is restructuring to foster more competition and provide an increased role for open markets. Competition is already under way in the wholesale markets that the federal government regulates. To facilitate fair wholesale competition, the federal government has also pressed for change in what entities control transmission lines—by approving the creation of independent transmission operators to take the place of utilities in performing this function. Some states, such as California and Pennsylvania, have also moved to introduce competition to state-regulated retail markets, where most consumers obtain their electricity. Although the electricity industry is restructuring to include a greater role for competition, the federal government still oversees wholesale electricity markets through the Federal Energy Regulatory Commission (FERC). Because federal actions have restructured wholesale markets nationwide and states have variously chosen to restructure the markets that they oversee, the national electricity market is currently a hybrid, somewhere between competitive and regulated.
Unlike the other forms of energy, electricity supplied by power plants must be balanced, on a second-to-second basis, with the amount of electricity consumed in homes and businesses. To do this, utilities or independent entities direct the production of electricity and its movement over transmission lines to avoid blackouts. In some cases, such as in California in 2000 and 2001 and more recently in the Northeast in 2003, the balance between supply and demand was disrupted and blackouts occurred. Electricity demand is projected to increase by at least 36 percent by 2025, and the industry may require significant investment in power plants and transmission lines to meet that demand. The National Energy Policy Development Group report estimated that the United States may need to add as many as 1,900 power plants to meet forecasted demand growth. In addition, because the existing network of power lines frequently experiences congestion, the capacity of many key transmission lines may need to be increased to move electricity from these new plants and improve the reliability of the existing system.

We have reported on the development of competition in the electricity industry and evaluated the oversight of electricity markets. For example, in one report we found that the way the market was structured in California enabled some electricity sellers to manipulate prices. We also reported on the ability to add new power plants in three states, concluding that the success of restructured markets hinged on private investment in power plants and that this investment was reduced by higher levels of perceived risk in some markets, such as in California. Further, we recently reported on the potential value of empowering consumers to manage their own electricity demand in order to save money and improve the functioning of these markets. Allowing consumers to see electricity prices enables them to reduce their usage when prices are high—reducing their energy bills and improving the functioning of the markets. Following the 2003 blackout, we issued a report that highlighted challenges and opportunities in the electricity industry, including whether reliability standards should be made mandatory and whether control systems critical to the electricity industry have adequate security. Regarding oversight of electricity markets, we reported that while the Federal Energy Regulatory Commission has made progress in revising its oversight strategy, it still faced challenges in better regulating these markets. In forthcoming work requested by the Congress, GAO will assess progress in reporting electricity market transactions for use in developing market indexes and the adequacy of controls over this reporting.

To what extent does the division of regulatory authority between the federal government and the states limit the electricity industry’s ability to achieve the benefits expected from the introduction of competition in electricity markets? What changes are necessary to federal and state monitoring and oversight of electricity markets to ensure that they are adequately overseen? Will FERC’s actions to promote reliability be sufficient, or will additional actions be needed to improve compliance with reliability rules? How does continued uncertainty about the future of electricity restructuring and electricity markets affect electricity companies, investment in new plants and transmission lines, and consumer prices?
What role should the federal Power Marketing Administrations play in restructured electricity markets? To what extent are homeland security principles being integrated into new electricity infrastructure and business processes?

Renewable energy sources, such as hydroelectric dams, ethanol, wind turbines, and geothermal and solar applications, currently make up a small percentage of the total energy resources consumed in the United States. Several alternative sources, such as hydrogen and fusion power, may offer long-term promise, but research remains at an early stage. While these renewable and alternative energy sources have a nearly unlimited domestic supply, are perceived as relatively clean, and help diversify the U.S. energy supply, technical problems and high costs relative to other options have limited their use.

According to EIA, in 2003 renewable and alternative energy sources accounted for slightly more than 6 percent of total U.S. energy consumption. Hydropower is the largest single source in this category and makes up over 45 percent of all renewable and alternative energy consumed. Hydropower generation, which varies with weather conditions, has fluctuated at about the same level since the 1970s. Wood accounts for about 34 percent of total renewable energy, although its use has declined since 1989. Waste and other byproducts, such as municipal solid waste, landfill gas, and biomass, account for about 9 percent, and their use has been relatively flat since the mid-1990s. Geothermal energy use has decreased slightly since it peaked in 1993 and now accounts for about 5 percent of the total. Alcohol fuels, such as ethanol, make up about 4 percent of the total, but their use has increased rapidly in recent years, almost doubling from 1999 through 2003. Wind energy accounted for about 2 percent of the total renewable energy consumed in 2003 but has witnessed substantial and persistent growth in recent years, more than tripling from 1998 through 2003. Solar energy accounts for about 1 percent of all renewable and alternative energy consumed, and its use has declined slightly but steadily since 1997, although use of some specific solar technologies, such as photovoltaic solar cells that convert sunlight directly into electricity, has grown in recent years.

Renewable energy technologies are increasingly becoming part of global markets and are, in some cases, owned by large multinational energy companies such as oil companies. Solar and wind energy have grown substantially in these markets but remain at relatively low levels in the United States. Growth in wind power has benefited from improvements in wind turbine technology and the availability of government tax credits here and overseas, both of which have improved the competitiveness of wind power technologies with more traditional forms of energy. EIA estimates that if the federal government removes the tax credit, U.S. growth in the generation of wind power will almost stop; if the government maintains the tax credit, however, wind power generation in the United States is expected to grow nearly seven-fold over the next 20 years. Solar technologies, especially solar cell technologies that produce electricity, have supplanted traditional technologies, such as generators, for some remote applications, and sales of solar cells have expanded rapidly worldwide, albeit from a small base.
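To put growth multiples like “more than tripling in five years” or “nearly seven-fold over 20 years” on a common footing, they can be converted into compound annual growth rates. A minimal Python sketch follows; the multiples are taken from the figures above, and the conversion formula is the standard compound-growth identity.

```python
# Convert a growth multiple over a period into a compound annual growth rate.
def cagr(multiple: float, years: int) -> float:
    """Compound annual growth rate implied by growing `multiple`-fold in `years`."""
    return multiple ** (1 / years) - 1

# Multiples cited above for renewable fuels.
print(f"Wind, tripling in 5 years:    {cagr(3.0, 5):.1%} per year")   # ~24.6%
print(f"Wind, seven-fold in 20 years: {cagr(7.0, 20):.1%} per year")  # ~10.2%
print(f"Ethanol, doubling in 4 years: {cagr(2.0, 4):.1%} per year")   # ~18.9%
```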
Several alternative sources may offer long-term promise, although they are not ready for widespread application. Technologies such as hydrogen power and fusion are currently being developed as new sources of energy. While these technologies have the potential to deliver large amounts of energy with fewer environmental impacts than traditional energy sources, they cannot be counted upon to deliver significant amounts of energy in the near future because of their higher costs and unresolved technical challenges. To date, use of hydrogen fuel cells still requires the extraction of hydrogen from another fuel source, such as natural gas, and currently this extraction is too costly to compete with other sources of energy. In addition, the infrastructure to support hydrogen power has not been built. While fusion also may have the ability to provide an abundant and clean energy source, research on this technology remains at a very early stage.

We have issued several reports describing the viability and technical progress of several renewable and alternative energy sources supported by the federal government. A continuing theme of these reports has been that when the government invests money in research and development initiatives, it is important to keep one eye on the technical goals and one eye on the marketplace. We have noted that the success of the investment should be measured by its contribution to increasing the use and feasibility of an energy source, rather than by the attainment of specific technical research and development goals. In forthcoming work requested by the Congress, GAO will report on the impact of wind turbines on birds and other aspects of the environment, as well as geothermal energy development in the United States.

Should the federal government establish clear and measurable goals for the development and use of renewable and alternative energy sources, and, if so, how should progress toward these goals be measured? What should the federal government’s role be in researching and developing existing and future sources of renewable and alternative energy? What are the costs and benefits of increasing our use of renewable and alternative energy sources? What are the implications of renewable energy mandates for deploying renewable energy technologies and for electricity markets?

Experts have long contended that energy strategies that reduce demand can cost less, be brought on line faster, and provide greater environmental benefits compared to strategies that increase the amount of energy supplied—particularly if demand reductions decrease fossil fuel consumption and related pollution. Such strategies include improving the efficiency of the energy we already use and allowing consumers to choose when it makes the most sense to conserve energy. Despite their advantages, however, opportunities to improve efficiency and consumer choice are often overlooked.

Overall, energy demand in the United States has trended steadily upward for the last 50 years. While demand has increased, the amount of energy the country uses relative to its economic output has fallen. The amount of energy used for each dollar of gross domestic product (GDP) has dropped by about half from 1970 through 2003. The reduction has been even more striking in the industrial sector, where energy used per dollar of GDP has fallen by over 60 percent since 1970.
It is not clear whether this reduction reflects a decrease in energy-intensive industries, such as aluminum and steel manufacturing, improvements in energy efficiency, or some combination of the two.

The federal government has periodically made efforts to reduce demand, encourage energy efficiency, or both. To reduce demand, the federal government has, among other things, encouraged consumers to voluntarily limit excessive heating and cooling of homes and to reduce the number of miles that they drive. To encourage energy efficiency, the federal government has established energy efficiency standards for such things as home appliances, air conditioners, and furnaces, as well as provided incentives for purchasing energy-efficient equipment. In the transportation sector, the federal government has required automakers to meet overall efficiency standards—known as Corporate Average Fuel Economy (CAFE) standards—for the vehicles they sell. The federal government has also invested in improving energy efficiency and saving money on energy at its own buildings through the Federal Energy Management Program and the use of energy savings performance contracts.

Federal efforts have met with some success. According to the American Council for an Energy Efficient Economy and the Alliance to Save Energy, energy efficiency investments made from 1973 through 2003 saved the equivalent of 40 to 50 quadrillion Btu of energy in 2003, equal to about 40 to 50 percent of total energy consumption and more than any single fuel provided. Several organizations, including a panel of several national laboratories, estimate that many opportunities for additional improvements in energy efficiency remain untapped.

At times, however, federal efforts to reduce energy demand and improve energy efficiency have had to compete with efforts to keep energy prices low. For example, the residential and commercial sectors of the economy have until recently been somewhat protected from price volatility by regulated prices for electricity and natural gas and thus have been less likely to reduce their consumption of these sources. Moreover, inflation-adjusted energy prices have generally declined, until recently. Reducing demand when prices are falling has been difficult for several reasons. For example, because energy-consuming equipment, such as air conditioners, furnaces, and lighting systems, is generally costly to purchase and lasts many years, consumers do not want to replace it unnecessarily. In addition, consumers are often not aware of the energy inefficiency of their homes and businesses. Falling energy prices have also made it more difficult to demonstrate the cost-effectiveness of spending money to replace aging and inefficient equipment, particularly for residential and commercial customers.

In contrast, when consumers face prolonged periods of higher energy prices, they are more likely to identify and adopt cost-effective strategies for reducing their energy demand. For example, following prolonged supply disruptions and price increases for gasoline in the 1970s, consumers in the 1980s chose to purchase more fuel-efficient vehicles, pushing up overall fuel efficiency averages nationwide. In the late 1990s, the opposite was true: relatively low prices for gasoline encouraged consumers to choose larger and less fuel-efficient vehicles. GAO has examined policies designed to reduce demand in electricity markets, as well as efforts to develop more fuel-efficient automobiles.
In August 2004, we issued a report finding that electricity demand programs that better link the prices consumers pay with the actual cost of generating electricity offer significant financial benefits to consumers, improve the functioning of electricity markets, and benefit the federal government by lowering its utility bills. In March 2000, we reported on the Partnership for a New Generation of Vehicles (which sought to develop a family sedan that could drive about 80 miles on a gallon of fuel) and found that the vehicle being developed did not match consumer vehicle preferences and that automakers would not be manufacturing such a vehicle for U.S. markets. In forthcoming work requested by the Congress, GAO will evaluate the Department of Energy’s program for setting energy efficiency standards for appliances.

What are the benefits and costs of potential federal efforts to reduce energy demand? Are there economic, regulatory, or other barriers preventing the adoption of cost-effective, energy-efficient technologies that could meet consumer needs? Are there promising energy-saving technologies that are nearly cost-effective that the federal government should consider encouraging through the use of consumer incentives? Are there emerging energy-efficiency technologies that are past basic research but that could benefit from federal and industry collaboration? Which technologies offer the greatest long-term potential for reducing demand, and should they be considered for intensive federal research? To what extent are retail price structures impeding the deployment of cost-effective, energy-efficient technologies?

Given the increasing signs of strain on our energy systems and our growing awareness of how our energy choices affect the environment, there is a growing sense that federal leadership could provide the first step in a fundamental reexamination of our nation’s energy policies. As the Congress, executive agencies, states and regions, industry, and consumers weigh such a reexamination, we believe that it makes sense to consider all energy sources together, along with options to encourage more efficient energy use and consumer choices to save energy. While a balanced energy portfolio is needed, striking that balance is difficult because of sometimes competing energy, environmental, economic, and national security needs.

Clearly, none of the nation’s energy options is without problems or trade-offs. Current U.S. energy supplies remain highly dependent on fossil energy sources that are either costly, imported, potentially harmful to the environment, or some combination of the three, while many renewable energy options remain more costly than traditional options. On the other hand, past efforts to reduce energy demand appear to have lost some of their effectiveness in recent years. Striking a balance between efforts to boost supplies from these various energy sources and those focused on reducing demand presents challenges as well as opportunities.

In the end, the nation’s energy policies come down to choices. Just as they did some 30 years ago in the aftermath of the major energy crises of the 1970s, congressional choices will strongly influence the direction that this country takes regarding energy issues—affecting consumer, supplier, and investor choices for years to come. Consumer choices made from today forward will determine to a great extent how much energy will be needed in the future.
In the same way, energy suppliers have choices about how much of each type of energy to provide, based increasingly on their interaction with competitive domestic and sometimes global markets for energy. Choices made by consumers and suppliers will be influenced by state and local entities, along with regional stakeholders in some areas of the country, which have authority over key decisions that affect such things as the siting of generation and transmission facilities as well as access to their lands. Similarly, investors have choices regarding where to invest their money, whether in new power plants, refineries, research and development for new technologies, or outside the energy sector altogether. Yet many of these choices may be significantly influenced, or even overshadowed, by broader forces that are beyond our control, such as expected energy demand growth in the developing world.

In closing, providing the American consumer with secure, affordable, reliable, and environmentally sound energy choices will be a challenge. I would like to note that more than 30 years ago, during the first energy crisis, our nation faced many of the same choices that we are confronting today. How far have we come? Have we charted a course that can be sustained in the 21st century? In 30 years, will we again come full circle and ask ourselves these same questions about our energy future? The answer to this final question lies in our collective ability to develop and sustain a strategic plan, with supporting incentives, along with a means to measure our progress and periodically adjust our path to meet future energy challenges.

I would be pleased to respond to any questions that you, or other Members of the Subcommittee, may have at this time. For further information about this testimony, please contact me, Jim Wells, at (202) 512-3841. Contributors to this testimony included Godwin Agbara, Dennis Carroll, Mark Gaffigan, Dan Haas, Mike Kaufman, Bill Lanouette, Jon Ludwigson, Cynthia Norris, Paul Pansini, Ilene Pollack, Melissa Roye, Frank Rusco, and Ray Smith.

Energy Markets: Effects of Mergers and Market Concentration in the U.S. Petroleum Industry. GAO-04-96. Washington, D.C.: May 17, 2004.
Research and Development: Lessons Learned from Previous Research Could Benefit FreedomCAR Initiative. GAO-02-810T. Washington, D.C.: June 6, 2002.
U.S. Ethanol Market: MTBE Ban in California. GAO-02-440R. Washington, D.C.: February 27, 2002.
Motor Fuels: Gasoline Prices in the West Coast Market. GAO-01-608T. Washington, D.C.: April 25, 2001.
Motor Fuels: Gasoline Prices in Oregon. GAO-01-433R. Washington, D.C.: February 23, 2001.
Petroleum and Ethanol Fuels: Tax Incentives and Related GAO Work. RCED-00-301R. Washington, D.C.: September 25, 2000.
Cooperative Research: Results of U.S.-Industry Partnership to Develop a New Generation of Vehicles. RCED-00-81. Washington, D.C.: March 30, 2000.
Alaskan North Slope Oil: Limited Effects of Lifting Export Ban on Oil and Shipping Industries and Consumers. RCED-99-191. Washington, D.C.: July 1, 1999.
International Energy Agency: How the Agency Prepares Its World Oil Market Statistics. RCED-99-142. Washington, D.C.: May 7, 1999.
Energy Security and Policy: Analysis of the Pricing of Crude Oil and Petroleum Products. RCED-93-17. Washington, D.C.: March 19, 1993.
Energy Policy: Options to Reduce Environmental and Other Costs of Gasoline Consumption. T-RCED-92-94. Washington, D.C.: September 17, 1992.
Energy Policy: Options to Reduce Environmental and Other Costs of Gasoline Consumption. RCED-92-260. Washington, D.C.: September 17, 1992.
Alaskan Crude Oil Exports. T-RCED-90-59. Washington, D.C.: April 5, 1990.
Energy Security: An Overview of Changes in the World Oil Market. RCED-88-170. Washington, D.C.: August 31, 1988.
Clean Air Act: Observations on EPA’s Cost-Benefit Analysis of Its Mercury Control Options. GAO-05-252. Washington, D.C.: February 28, 2005.
Fossil Fuel R&D: Lessons Learned in the Clean Coal Technology Program. GAO-01-854T. Washington, D.C.: June 12, 2001.
Clean Coal Technology: Status of Projects and Sales of Demonstrated Technology. RCED-00-86R. Washington, D.C.: March 9, 2000.
Natural Gas: Domestic Nitrogen Fertilizer Production Depends on Natural Gas Availability and Prices. GAO-03-1148. Washington, D.C.: September 30, 2003.
Natural Gas Flaring and Venting: Opportunities to Improve Data and Reduce Emissions. GAO-04-809. Washington, D.C.: July 14, 2004.
Natural Gas: Analysis of Changes in Market Price. GAO-03-46. Washington, D.C.: December 18, 2002.
Energy Deregulation: Status of Natural Gas Customer Choice Programs. RCED-99-30. Washington, D.C.: December 15, 1998.
Nuclear Regulation: NRC’s Assurances of Decommissioning Funding During Utility Restructuring Could Be Improved. GAO-02-48. Washington, D.C.: December 3, 2001.
Nuclear Waste: Technical, Schedule, and Cost Uncertainties of the Yucca Mountain Repository Project. GAO-02-191. Washington, D.C.: December 21, 2001.
Nuclear Nonproliferation: Implications of the U.S. Purchase of Russian Highly Enriched Uranium. GAO-01-148. Washington, D.C.: December 15, 2000.
Nuclear Regulation: Better Oversight Needed to Ensure Accumulation of Funds to Decommission Nuclear Power Plants. RCED-99-75. Washington, D.C.: May 3, 1999.
Nuclear Waste: Impediments to Completing the Yucca Mountain Repository Project. RCED-97-30. Washington, D.C.: January 17, 1997.
Renewable Energy: Wind Power’s Contribution to Electric Power Generation and Impact on Farms and Rural Communities. GAO-04-756. Washington, D.C.: September 3, 2004.
Geothermal Energy: Information on the Navy’s Geothermal Program. GAO-04-513. Washington, D.C.: June 4, 2004.
Department of Energy: Solar and Renewable Resources Technologies Program. RCED-97-188. Washington, D.C.: July 11, 1997.
Energy Policy: DOE’s Policy, Programs, and Issues Related to Electricity Conservation. RCED-97-107R. Washington, D.C.: April 9, 1997.
Energy Markets: Additional Actions Would Help Ensure That FERC’s Oversight and Enforcement Capability Is Comprehensive and Systematic. GAO-03-845. Washington, D.C.: August 15, 2003.
Energy Markets: Concerted Actions Needed by FERC to Confront Challenges That Impede Effective Oversight. GAO-02-656. Washington, D.C.: June 14, 2002.
Electricity Markets: Consumers Could Benefit from Demand Programs, but Challenges Remain. GAO-04-844. Washington, D.C.: August 13, 2004.
Electricity Restructuring: 2003 Blackout Identifies Crisis and Opportunity for the Electricity Sector. GAO-04-204. Washington, D.C.: November 18, 2003.
Air Pollution: Meeting Future Electricity Demand Will Increase Emission of Some Harmful Substances. GAO-03-49. Washington, D.C.: October 30, 2002.
Electricity Markets: FERC’s Role in Protecting Consumers. GAO-03-726R. Washington, D.C.: June 6, 2003.
Electricity Restructuring: Action Needed to Address Emerging Gaps in Federal Information Collection. GAO-03-586. Washington, D.C.: June 30, 2003.
Lessons Learned From Electricity Restructuring: Transition to Competitive Markets Underway, but Full Benefits Will Take Time and Effort to Achieve. GAO-03-271. Washington, D.C.: December 17, 2002.
Restructured Electricity Markets: California Market Design Enabled Exercise of Market Power. GAO-02-828. Washington, D.C.: June 21, 2002.
Air Pollution: Emissions from Older Electricity Generating Units. GAO-02-709. Washington, D.C.: June 12, 2002.
Restructured Electricity Markets: Three States’ Experiences in Adding Generating Capacity. GAO-02-427. Washington, D.C.: May 24, 2002.
California Electricity Market: Outlook for Summer 2001. GAO-01-870R. Washington, D.C.: June 29, 2001.
California Electricity Market Options for 2001: Military Generation and Private Backup Possibilities. GAO-01-865R. Washington, D.C.: June 29, 2001.
Energy Markets: Results of Studies Assessing High Electricity Prices in California. GAO-01-857. Washington, D.C.: June 29, 2001.
Electric Utility Restructuring: Implications for Electricity R&D. T-RCED-98-144. Washington, D.C.: March 31, 1998.
Mineral Revenues: A More Systematic Evaluation of the Royalty-in-Kind Pilots Is Needed. GAO-03-296. Washington, D.C.: January 9, 2003.
Alaska’s North Slope: Requirements for Restoring Lands After Oil Production Ceases. GAO-02-357. Washington, D.C.: June 5, 2002.
Royalty Payments for Natural Gas From Federal Leases in the Outer Continental Shelf. GAO-01-101R. Washington, D.C.: October 24, 2000.
Plentiful, relatively inexpensive energy has been the backbone of much of modern America’s economic prosperity and the activities that essentially define our way of life. The energy systems that have made this possible, however, are showing increasing signs of strain and instability, and the consequences of our energy choices for the natural environment are becoming more apparent. The reliable energy mainstay of the 20th century seems less guaranteed in the 21st century. As a nation, we have witnessed profound growth in the use of energy over the past 50 years, nearly tripling our energy use in that time. Although the United States accounts for only 5 percent of the world’s population, we now consume about 25 percent of the energy used each year worldwide. Looking into the future, the Energy Information Administration (EIA) estimates that U.S. energy demand could increase by about another 30 percent over the next 20 years.

To aid the subcommittee as it evaluates U.S. energy policies, GAO agreed to provide its views on energy supplies and energy demand, as well as observations that have emerged from its energy work. This testimony is based on GAO’s published work in this area, conducted in accordance with generally accepted government auditing standards, and on EIA’s Annual Energy Review, 2003 and its Annual Energy Outlook, 2005.

America’s demand for energy has, in recent decades, outpaced its ability to supply energy. As a result, the country has witnessed rapid price increases and volatility in some markets, such as gasoline, and reliability problems in others, such as electricity, where the blackout in 2003 left millions in the dark. Given these recent and sometimes persistent problems, as well as concerns about the impacts of energy consumption on air, water, and other natural resources, there is a growing sense that action is needed.

Today, fossil fuels (coal, oil, and natural gas) provide about 86 percent of our total energy consumption, with the rest coming from nonfossil sources such as nuclear (8 percent) and renewables, such as hydroelectric energy and wind power (6 percent). Overall, the majority of the nation’s energy consumption is met by domestic production. However, imports of some fuels have risen. For example, over the past 20 years, imports, primarily oil and natural gas, have doubled, and in 2003 these imports comprised about one-third of total domestic energy consumption. Imports are expected to increase still further in order to meet future domestic consumption. In light of the current and expected levels of imports, the United States is, and will increasingly be, subject to global market conditions, with the transportation sector especially affected. Global markets may face future difficulties in meeting the growing energy demands of developed nations while also meeting the demands of the developing world, particularly considering the explosive growth in some economies, such as China’s and India’s. If world supplies of some fuels do not keep pace with world demand, energy prices could rise sharply.

GAO believes that a fundamental reexamination of the nation’s energy base and related policies is needed and that federal leadership will be important in this effort. To help frame such a reexamination, we offer three broad crosscutting observations. First, regarding demand, the amount of energy that needs to be supplied is not fate, but our choice.
Consumers, whether businesses or individuals, choose to use energy because they want the services that energy provides, such as automated manufacturing and advanced computer technologies. Accordingly, consumers can play an important role in using energy wisely, if encouraged to adjust their usage in response to changes in prices or other factors. Second, all of the major fuel sources, traditional and renewable, face environmental, economic, or other constraints or trade-offs in meeting projected demand. Consequently, all energy sources will be important in meeting expected consumer demand in the next 20 years and beyond. Third, whatever federal policies are chosen, providing clear and consistent signals to energy markets, including consumers, suppliers, and the investment community, will help them succeed. Such signals help consumers to make reasoned choices about energy purchases and give energy suppliers and the investment community confidence that policies will be sustained, reducing investment risk.
Employer-sponsored pensions fall into two major categories: defined benefit (DB) and defined contribution (DC) plans. In DB, or traditional, plans, benefits are typically set by formula, with workers receiving benefits upon retirement based on the number of years worked for a firm and earnings in years prior to retirement. In DC plans, workers accumulate savings through contributions to an individual account. These accounts are tax-advantaged in that contributions are typically excluded from current income, and earnings on balances grow tax-deferred until they are withdrawn. An employer may also make contributions, either by matching employees’ contributions up to plan or legal limits, or on a non-contingent basis. Like DB plans, DC plans operate in a voluntary system with tax incentives for employers to offer a plan and for employees to participate. Contributions to and earnings on DC plan accounts are not taxed until the participant withdraws the money, although participants making withdrawals prior to age 59½ may incur an additional 10 percent tax. In 2006, the pension tax expenditure for DC plans amounted to $54 billion. In addition, the saver’s credit, a nonrefundable tax credit for qualifying low- and middle-income workers who make contributions, accounted for less than 2 percent of the 2006 tax expenditure on account-based retirement plans.

DC plans offer workers more control over their retirement asset management but also shift some of the responsibility and certain risks onto workers. Workers generally must elect to participate in a plan and make regular contributions into their plans over their careers. Participants typically choose how to invest plan assets from a range of options provided under their plan and accordingly face investment risk. Savings in DC plans are portable in the sense that a participant may keep plan balances in a tax-protected account upon leaving a job, either by rolling over plan balances into a new plan or an IRA or, in some cases, by leaving the money in the old plan. Workers may have access to plan savings prior to retirement, either through loans or withdrawals; participants may find such features desirable, but pre-retirement access may also lead to lower retirement savings (sometimes referred to as leakage) and possible tax penalties. Workers who receive DC distributions in lump-sum form must manage account withdrawals such that their savings last throughout retirement.

In contrast, a formula, often based on preretirement average pay and years of service, determines DB plan benefits, and workers are usually automatically enrolled in a plan. The employer has the responsibility to ensure that the plan has sufficient funding to pay promised benefits, although the sponsor can choose to terminate the plan. DB plans also typically offer the option to take benefits as a lifetime annuity, or periodic benefits until death. An annuity provides longevity insurance against outliving one’s savings but may lose purchasing power if benefits do not rise with inflation. Table 1 summarizes some of the primary differences between DC and DB plans.

Over the past 25 years, DC plans have become the dominant type of private sector employee pension. In 1980, private DB plans had 38 million participants, while DC plans had 20 million. As of 2004, 64.6 million participants had DC plans, while 41.7 million had DB plans.
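To make the employer matching arrangement described above concrete, the Python sketch below assumes one common design, a 50 percent match on contributions up to 6 percent of salary. The report does not specify any particular formula, so the match rate, cap, and salary here are purely hypothetical.

```python
# Hypothetical 401(k) employer match: 50 cents per dollar on the first
# 6 percent of salary the employee defers. All parameters are assumptions.
def employer_match(salary: float, deferral_rate: float,
                   match_rate: float = 0.50, match_cap: float = 0.06) -> float:
    """Annual employer matching contribution under a simple capped match."""
    matched_deferral = min(deferral_rate, match_cap) * salary
    return match_rate * matched_deferral

salary = 50_000  # illustrative salary
for deferral in (0.03, 0.06, 0.10):
    match = employer_match(salary, deferral)
    total = deferral * salary + match
    print(f"Defer {deferral:.0%}: employee ${deferral * salary:,.0f}, "
          f"match ${match:,.0f}, total ${total:,.0f}")
```

Under these assumed terms, deferrals beyond the 6 percent cap raise the employee's own contribution but not the match, which is why match caps figure prominently in participation decisions.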
In addition, over 80 percent of private sector DC participants in 2004 were active participants (in a plan with their current employer), while about half of DB participants had separated from their sponsoring employer or retired. According to the Employee Benefit Research Institute (EBRI), while overall pension coverage among families remained around 40 percent between 1992 and 2001, 38 percent of families with a pension relied exclusively on a DC plan for retirement coverage in 1992, while 62 percent had a DB plan. In 2001, 58 percent of pension-participating families had only a DC plan, while 42 percent had a DB plan. Assets in all DB plans exceeded total DC assets as recently as 1995. As of 2006, DC plans had almost $3.3 trillion in assets, while DB plans had almost $2.3 trillion. In addition, assets in IRAs, accounts that are also tax-protected and include assets rolled over from employer-sponsored plans, measured over $4.2 trillion in 2006.

There are several different categories of DC plans. Most of these plans are types of cash or deferred arrangements (CODA), in which employees can direct pre-tax dollars, along with any employer contributions, into an account, with assets growing tax-deferred until withdrawal. The 401(k) plan is the most common, covering over 85 percent of active DC participants. Certain types of tax-exempt employers may offer plans, such as 403(b) or 457 plans, which have many features similar to 401(k) plans. Many employers match employee contributions, generally based on a specified percentage of the employee’s salary and the rate at which the participant contributes. Small business owners may offer employees a Savings Incentive Match Plan for Employees of Small Employers (SIMPLE) or a Simplified Employee Pension Plan (SEP), two types of DC plans that have reduced regulatory requirements for sponsors. Other types of DC plans keep the basic individual account structure of the 401(k) but with different requirements and employer practices. Some are designed primarily for employer contributions. These include money purchase plans, which specify fixed annual employer contributions; profit sharing plans, in which the employer decides annual contributions, perhaps based on profits, and the allocation of these to each participant; and employee stock ownership plans (ESOPs), in which contributions are primarily invested in company stock.

Building up retirement savings in DC plans rests on factors that are, to some degree, outside of the control of the individual worker, as well as behaviors an individual does control (see fig. 1). Factors outside the individual’s direct control include the following:

Plan sponsorship—the employer’s decision to sponsor a plan, as well as participation eligibility rules.

Employer contributions—whether the sponsor makes matching or noncontingent contributions.

Investment options—the plan sponsor’s decisions about investment options to offer to participants under the plan.

Market returns on plan assets—market performance of plan assets.

Key individual decisions and behaviors that may affect retirement savings include the following:

Employee contributions—deposits into the plan account, typically out of current wages.

Investment decisions—how to invest plan assets given investment options offered under the plan.

Pre-retirement withdrawals—early withdrawals of plan balances, which usually incur a tax penalty.
Similarly, taking out a loan from a plan, if allowed, may reduce future balances if the loan is not repaid in full and is treated as a withdrawal, or by lowering investment returns.

Rollover—upon separation from a job, a participant may transfer the plan account balance to an IRA, which maintains most of the same tax preferences on the balances, move it to a new tax-qualified plan, or leave the money in the old plan. Alternatively, any cash withdrawal would likely be subject to income tax and penalties.

Age at retirement—the decision as to when to retire determines how many years the worker has to accumulate plan balances and how long the money has to last in retirement.

There is little consensus about how much constitutes “enough” savings to have going into retirement. We may define retirement income adequacy relative to a standard of minimum needs, such as the poverty rate, or relative to the consumption spending that households experienced during working years. Some economists and financial advisors consider retirement income adequate if the ratio of retirement income to pre-retirement income—or replacement rate—is between 65 and 85 percent. Retirees may not need 100 percent of pre-retirement income to maintain living standards for several reasons: retirees will no longer need to save for retirement, retirees’ payroll and income tax liability will likely fall, work expenses will no longer be required, and mortgages and children’s education and other costs may have been paid off. However, some researchers cite uncertainties about future health care costs and future Social Security benefit levels as reasons to suggest that a higher replacement rate, perhaps 100 percent or higher, would be considered adequate.

To achieve adequate replacement rates, retirees depend on different sources of income to support themselves in retirement. Social Security benefits provide the bulk of retirement benefits for most households. As of 2004, annuitized pension benefits provided almost 20 percent of total income to households with someone age 65 or older, while Social Security benefits provided 39 percent. Social Security benefits make up over 50 percent of total income for two-thirds of households with someone age 65 or older, and at least 90 percent of income for one-third of such households. Table 2 shows estimated replacement rates from Social Security benefits for low and high earners retiring in 2007 and 2055, as well as the remaining share of pre-retirement income necessary to achieve a 75 percent replacement rate. These figures give rough guidelines for how much retirement income workers might need from other sources, such as employer-sponsored pensions, as well as earnings and income from other savings or assets.

It is important to keep certain economic principles in mind when evaluating the effectiveness of retirement accounts, or any pensions, in providing retirement income security. First, balances accumulated in a DC plan may not represent new saving; individuals may have saved in another type of account in the absence of a DC plan or its tax preferences. Second, evaluating worker income security should consider total compensation, not just employer contributions to DC plans. All else equal, we should generally expect more generous employer-sponsored pension benefits to lower cash wages, and the split between current wages and deferred compensation is largely a reflection of labor market conditions, tax provisions, and worker and employer preferences.
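The arithmetic behind Table 2 can be made explicit: given a target replacement rate and Social Security's estimated replacement rate for a worker, the remainder is the share that pensions and other savings must supply. A minimal sketch follows, using the 75 percent target cited above; the Social Security replacement rates shown are placeholders, since Table 2's actual values are not reproduced in this excerpt.

```python
# Replacement-rate gap: share of pre-retirement income that pensions and
# other savings must replace once Social Security is accounted for.
TARGET_REPLACEMENT = 0.75  # target cited in the report

def savings_gap(ss_replacement: float, target: float = TARGET_REPLACEMENT) -> float:
    """Replacement-rate share left for pensions and other savings."""
    return max(target - ss_replacement, 0.0)

# Placeholder Social Security replacement rates (not Table 2's figures).
for worker, ss_rate in [("low earner", 0.55), ("high earner", 0.30)]:
    print(f"{worker}: Social Security replaces {ss_rate:.0%}, "
          f"other sources must replace {savings_gap(ss_rate):.0%}")
```

Because Social Security replaces a larger share of a low earner's wages, the residual burden on pensions and personal savings is generally heavier for higher earners.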
Many workers do not have DC plans, and median savings levels among participants show modest balances. While it is worth noting that for workers nearing retirement age, DC plans were not the primary form of pension plan for a significant portion of their working careers, participation rates and median balances in such plans are low across all ages. Only 36 percent of working individuals were actively participating in a DC plan, according to data from the 2004 SCF. Further, workers aged 55 to 64 had median balances totaling $50,000 in account-based retirement savings vehicles, including DC plans and rollover accounts. Leakage, when workers withdraw DC savings before retirement age, can also reduce balances; almost half of those taking lump-sum distributions upon leaving a job reported cashing out their balances for non-retirement purposes. Participation among lower-income workers was particularly limited, and those who did have accounts had very low balances. The majority of workers, in all age groups, are not participating in DC plans with their current employers. Employers do not always offer retirement plans, and when they do, plans may have initial eligibility restrictions, and some eligible workers choose not to participate. According to our analysis of the 2004 SCF, only 62 percent of workers were offered a retirement plan by their employer, and 84 percent of those offered a retirement plan participated. Only 36 percent of working individuals participated in a DC plan with their current employer (see fig. 2). Data indicated similar participation rates for working households, as 42 percent of households had at least one member with a current DC plan. For many workers who participated in a plan, overall balances in DC plans were modest, suggesting a potentially small contribution toward retirement security for most plan participants and their households. However, since DC plans were less common before the 1980s, older workers would not have had access to these plans for their whole careers. In order to approximate lifetime DC balances when discussing mean and median DC balances in this report, our analysis of the 2004 SCF aggregates the "total balances" of DC plans with a current employer, DC plans with former employers that have been left with the former employer, and any retirement plans with former employers that have been rolled over into a new plan or an IRA. Workers with a "current or former DC plan" are current workers with one or more of those three components. For all workers with a current or former DC plan, the median total balance was $22,800. For all households with a current or former DC plan, the median total balance was $27,940 (see fig. 3). For individuals nearing retirement age, total DC plan balances are still low. Given trends in coverage since the 1980s, older workers close to retirement age are more likely than younger ones to have accrued retirement benefits in a DB plan. However, older workers who will rely on DC plans for retirement income may not have time to substantially increase their total savings without extending their working careers, perhaps for several years. Among all workers aged 55 to 64 with a current or former DC plan, the median balance according to the 2004 SCF was $50,000, which would provide an income of about $4,400 a year, replacing about 9 percent of income for the average worker in this group. Among all workers aged 60 to 64 with a current or former DC plan, the median balance was $60,600.
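The conversion of a $50,000 balance into roughly $4,400 of annual income can be approximated with a standard level-payment annuity formula. The sketch below is illustrative only; the 6 percent interest rate and 20-year payout horizon are assumptions chosen to be broadly consistent with the figures cited above, whereas actual annuity pricing is based on mortality tables and includes administrative loads.

    # Rough sketch of converting an account balance into level annual
    # annuity income. The interest rate and payout horizon are assumed
    # for illustration, not taken from the underlying analysis.

    def annual_annuity_income(balance, rate=0.06, years=20):
        """Level annual payment that exhausts `balance` over `years`
        at interest rate `rate` (ordinary annuity)."""
        factor = rate / (1 - (1 + rate) ** -years)
        return balance * factor

    income = annual_annuity_income(50000)
    print(round(income))             # 4359, close to the $4,400 cited
    print(round(income / 50000, 3))  # 0.087: a payout rate near 9 percent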
Markedly higher values for mean balances versus median balances in figure 3 illustrate that some individuals in every age group are successfully saving far more than the typical individual, increasing the mean savings. These are primarily individuals at the highest levels of income. Leakage, or cashing out accumulated retirement savings for non-retirement purposes, adversely affects account accumulation for some of those with accounts, particularly for lower-income workers with small account balances. Participants who withdraw money from a DC plan before age 59 ½ generally pay ordinary income taxes on the distributions, plus an additional 10 percent tax in most circumstances. Participants may roll their DC plan balances into another tax-preferred account when they leave a job, and employers are required, in the absence of participant direction, to automatically roll DC account distributions greater than $1,000 but not greater than $5,000 into an IRA, or to leave the money in the plan. As of 2004, 21 percent of households in which the head of household was under 59 had ever received lump-sum distributions from previous jobs' retirement plans. Among these households that received lump-sum distributions, 47 percent had cashed out all the funds, 4 percent cashed out some of the funds, and 50 percent preserved all the funds by rolling them over into another retirement account. Workers were more likely to roll over funds when the balances were greater. Among households that had cashed out all retirement plans with former employers, the median total value of those funds was $6,800. For households that had rolled over all retirement plans with former employers, the median total value of rolled-over funds was $24,200. Some evidence suggests that pre-retirement withdrawals may be decreasing. One study finds that those receiving lump-sum distributions are more likely to preserve funds in tax-qualified accounts than they were in the past. For example, data show that in 1993, 19 percent of lump-sum distribution recipients preserved all of their savings by rolling them into a tax-qualified account, compared to 43 percent in 2003. Further, 23 percent used all of their distribution for consumption in 1993, declining to 15 percent in 2003 (see fig. 4). According to the same study, age and size of the distribution are major determinants of whether or not the distribution is preserved in a tax-qualified account. For example, the authors found that 55.5 percent of recipients aged 51 to 60 rolled their entire distribution into a tax-qualified account, compared with 32.7 percent of recipients aged 21 to 30. Additionally, 19.9 percent of distributions from $1 to $499 were rolled over into tax-qualified accounts, as opposed to 68.1 percent of distributions of $50,000 or more. Additionally, some participants take loans from their DC plan, which may reduce plan savings. One survey found that in 2005, 85.2 percent of employers surveyed offered a loan option. Most eligible participants do not take loans, and one analysis finds that at year-end 2006, loans amounted to 12 percent of account balances for those who had loans. Individuals may prefer to take out pension loans in lieu of other lines of credit because pension loans require no approval and have low or no transaction costs. Borrowers also pay the loan principal and interest back to their own accounts.
However, someone borrowing from a DC plan may still lose money if the interest on the loan paid back to the account is less than what the account balance would have earned had the loan not been taken. Further, loans not paid back in time, or not paid back before the employee leaves the job, may be subject to early withdrawal penalties. No data have been reported on the rate of loan defaults, but default rates are expected to be much lower where repayments are made by payroll withholding than where they are not. However, a loan feature may also have a positive effect on participation, as some workers may choose to participate who otherwise might not, precisely because they can borrow from their accounts for non-retirement purposes at relatively low interest rates. Among workers in the lowest income quartile, only 8 percent participated in a current DC plan, a result of markedly lower access as well as lower participation than the average worker (see fig. 5). Only 25 percent of workers in the lowest income quartile were offered any type of retirement plan by their employer, and among those offered a retirement plan, 60 percent elected to participate, compared with 84 percent among workers of all income levels. Workers in the lower half of the income distribution with either current or former DC plans had total median balances of $9,420. Older workers who were less wealthy also had limited retirement savings. Workers with a current or former DC plan, aged 50-59 and at or below the median level of wealth, had median total savings of only $13,800. Workers with a current or former DC plan, aged 60-64 and at or below the median level of wealth, had median total savings of $18,000, a level that could provide at best only a limited supplement to retirement income. If converted into a single life annuity at age 65, this balance would provide only $132 per month—about $1,600 per year. Notably, workers with low DC balances were less likely to have a DB pension to fall back on than workers with higher DC balances. Among all workers participating in current or former DC plans, only 17 percent of those in the bottom quartile for total plan savings were also covered by a current DB plan. In contrast, 32 percent of those in the top quartile for total DC savings also had DB coverage. Among all workers with a current or former DC plan, the plan balances for those with DB coverage were higher than for those without DB coverage. The median DC balance for workers with DB coverage was $31,560, while the median DC balance for workers without DB coverage was $20,820. Simulations of projected retirement savings in DC plans suggest that a large percentage of workers may accumulate enough over their careers to replace only a small fraction of their working income, although results vary widely by income level and depend on model assumptions. Projected savings allow us to analyze how much workers might save over a full working career under a variety of conditions in a way that analyzing current plan balances cannot, since DC plans have become primary employer-sponsored plans only relatively recently. Baseline simulations of projected retirement savings for a hypothetical 1990 birth cohort indicate that DC plan savings would on average replace about 22 percent of annualized career earnings, but provide no savings to almost 37 percent of the working population, reflecting several factors: working for employers who do not offer a plan, choosing not to participate, or withdrawing any accumulated plan savings prior to retirement.
Further, projected DC account balances vary widely by income quartile, with workers in the lowest income quartile saving enough for about a 10 percent replacement rate and those in the highest quartile enough for a 34 percent replacement rate, on average. Changing certain assumptions about plan features, individual behavior, or markets, such as increasing participation or account rollover rates, raised projected average savings and increased the number of workers who had some DC plan savings at retirement, especially among low-income workers. Other scenarios, such as assuming higher contribution limits or delaying retirement, raised average replacement rates but concentrated more of the positive impact on higher-income workers and had little effect on reducing the number of workers with no savings at retirement. Our projections, based on a sample of workers born in 1990, show that workers would save enough in their DC plans over their careers to produce, when converted to a lifetime annuity at the time of retirement, an average of $18,784 per year in 2007 dollars (see table 3). The projections assume that all workers fully annuitize all accumulated DC plan balances at retirement, which occurs sometime between ages 62 and 70. Participants are assumed to always invest all plan assets in life cycle funds, and stocks earn an average real annual return of 6.4 percent. This $18,784 annuity would replace, on average, 22.2 percent of annualized career earnings for workers in the cohort. Savings and replacement rates vary widely across income groups. Almost 37 percent of workers in this cohort have no projected DC plan savings at retirement, which brings down overall average replacement rates. Workers in the lowest income quartile accumulate DC plan savings equivalent to an annuity of about $1,850 per year, or a 10.3 percent replacement rate, and 63 percent of this group have no plan savings by the time they retire. In contrast, highest income quartile workers save enough to receive about $50,000 per year in annuity income, enough for a 33.8 percent replacement rate. Even in this highest-income group, over 16 percent of workers have zero plan savings at retirement. In all cases, our replacement rates include projected savings only in DC plans. Retirees may also receive benefits from DB plans, as well as from Social Security, which typically replaces a higher percentage of earnings for lower-income workers. Projected household-level plan savings show a higher average replacement rate of 33.8 percent, with about 29 percent of households having no plan savings at retirement. When we assume that plan assets earn a lower average real annual return of 2.9 percent, average replacement rates from DC plan savings fall to about 16 percent for the sample. Under this assumption, workers in the lowest income quartile receive an average 7.1 percent replacement rate from DC plans, while highest income quartile workers receive an average 25 percent replacement rate. Lower rates of return affect the percentage of workers with no accumulated DC plan savings only slightly, perhaps because on the margins some participants might choose (or have their employers choose) to cash out lower balances. Table 3 also shows savings statistics for sub-samples of the cohort who have a better chance of accumulating significant DC plan savings, such as those workers who have long-term eligibility to participate in a plan or who work for many years.
As expected, these groups have higher projected savings; replacement rates also show a more even distribution across income groups, compared to those in the full sample. However, we still see a significant portion of the workers with no DC savings at retirement. First, we limit the sample only to those workers who are eligible to participate in a plan for at least 15 years over their careers. Average replacement rates for this group measure 33.5 percent, with rates ranging from 21.7 percent for lowest income quartile workers to 42.3 percent for the highest quartile. Even with such long-term eligibility for plan coverage, however, 15.6 percent of these workers, and almost one-third of lowest-income workers, have nothing saved in DC plans at the time they retire. This could result from workers choosing not to participate or from cashing out plan balances prior to retirement. We also analyze the prospects of workers with long-term attachment to the labor market, defined here as people who work full-time for at least 25 years, without regard to plan coverage or participation. Among these workers, average DC plan savings at retirement account for a 26.5 percent replacement rate. Still, almost 29 percent of these workers have no projected savings. This suggests that while DC plans have the potential to provide significant retirement income, saving may be difficult for some workers even when they work for many years, including some whose employers offer a plan. Our simulations indicate that increasing participation and reducing leakage out of DC plans may have a particularly significant impact on overall savings, especially for lower-income workers. Of the changes in the model assumptions that we simulated, these had the broadest effect on savings because they not only raised average savings for the entire sample but also had a relatively strong impact on workers in the lowest income quartile and on the number of workers with no DC plan savings at retirement. While these assumptions represent stylized scenarios, they illustrate the potential effect of such changes on savings. We project DC plan savings assuming that all employees of a firm that sponsors a DC plan participate immediately, rather than having to wait for eligibility or choosing not to participate. In our baseline projections, 6 percent of workers whose employers sponsor a plan are ineligible to participate, and 33 percent of those eligible choose not to participate; therefore, this assumption significantly raises plan participation rates among workers. Accordingly, average DC savings rise by almost 40 percent, raising average replacement rates to 35 percent, and the percentage of the population with no savings at retirement drops by half, down to 17.7 percent (see table 4). Assuming automatic eligibility and participation raises projected plan savings significantly for lower-wage workers, more than doubling the annuity equivalent of retirement savings for the lowest income quartile. Workers in the highest income group also increase savings under this scenario, with plan savings rising by 30 percent.
This change in projected savings suggests that automatically enrolling new employees in plans as a default could have a significant positive impact on DC balances, especially for low-income workers whose jobs offer a plan. This stylized scenario, however, likely describes a more extreme change in eligibility and participation than plans would achieve under automatic enrollment, and higher participation and savings would raise employers' pension costs, perhaps leading to a reduction in benefits or coverage. Another stylized scenario we model assumes that all workers who have a DC plan balance always keep the money in a tax-preferred account upon leaving a job, either by keeping the money in the plan, transferring it to a new employer plan, or rolling it into an IRA, rather than cashing out any accumulated savings. Eliminating this source of leakage raises average annuity income from DC plans by almost 11 percent and average replacement rates from 22.2 percent in the baseline to 25.6 percent; it also reduces the percentage of the cohort with no DC savings at retirement by over 25 percent. As with the instant participation scenario, "universal rollover" raises annuity savings and reduces the number of retirees with zero plan savings by the biggest percentages among lower-income workers, suggesting that cashing out accumulated plan savings prior to retirement may be a more significant drain on retirement savings for these groups. These results indicate that policies to encourage participants to keep DC plan balances in tax-preferred retirement accounts, perhaps by making rollover of plan assets a default action in plans, may have a broad positive impact on retirement savings. Other changes we make in our projections related to plan features or individual behavior affect average replacement rates overall, but with less impact on lower-income workers' replacement rates and on the number of workers with zero plan savings at retirement. These scenarios include assumed changes in annual contribution limits and retirement decisions (see table 5). We model projected retirement savings assuming that annual DC contribution limits for employees rise from $15,500 to $25,000, and the combined employer-employee maximum contribution level rises from $45,000 to $60,000, starting in 2007. Higher annual maximum contributions affect projected savings almost exclusively among the highest-income group, indicating that few workers earning less are likely to contribute at existing maximum levels. The highest income quartile replacement rate rises from 33.8 to 38.5 percent, while replacement rates hardly change in the lower income groups. Similarly, this scenario has almost no impact on the percentage of workers with no DC plan savings at retirement. Finally, we model retirement savings in two scenarios in which workers delay retirement by 1 or 3 years. Encouraging workers to retire later has been suggested as a key element in improving retirement income security, by increasing earnings, allowing more time to save for retirement, and reducing the length of retirement. Delaying retirement not only provides more years to contribute to and earn returns on plan balances but also might raise annual retirement income because older retirees receive more annuity income for any given level of savings, holding all else equal. Even so, working longer only modestly raises retirement savings in our projections.
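The direction of this effect can be seen in a simple decomposition: each year of delay adds a year of contributions and investment returns and shortens the expected payout period, which raises the annuity payable per dollar of balance. The sketch below is a stylized illustration with hypothetical parameter values rather than the PENSIM inputs; under these simplified assumptions it overstates the gain relative to our projections, which delay retirement by at most the stated number of years and price annuities with mortality tables.

    # Stylized sketch of why delaying retirement raises annual annuity
    # income. All parameter values are hypothetical assumptions.

    def annuity_factor(rate, years):
        """Level-payment factor for an ordinary annuity."""
        return rate / (1 - (1 + rate) ** -years)

    def annual_income(balance, contribution, real_return, extra_years,
                      base_payout_years=20, annuity_rate=0.03):
        # Grow the balance for each year retirement is delayed.
        for _ in range(extra_years):
            balance = balance * (1 + real_return) + contribution
        # Each year of delay shortens the payout period by about a year.
        return balance * annuity_factor(annuity_rate,
                                        base_payout_years - extra_years)

    base = annual_income(100000, 5000, 0.05, 0)
    delayed = annual_income(100000, 5000, 0.05, 1)
    print(round(delayed / base - 1, 3))  # ~0.14 in this stylized case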
Working one extra year raises projected annuity income by 5.8 percent, but has little effect on the percentage of people with no DC savings in our projections. Delaying retirement by 3 years raises annuity income from DC plans by 20.9 percent on average, with replacement rates rising from 22.2 percent in the baseline to 25.7 percent overall. The 3-year delay increases annuity levels somewhat evenly across income groups, with higher-income workers showing slightly higher increases. Overall, working an extra 3 years raises average replacement rates about as much as universal account rollover would, but with little reduction in the number of workers with no retirement savings. Thus, while working longer would likely raise workers' incomes, and in most cases retirement benefits from other sources such as Social Security, our projections show that this change alone would have a modest impact on retirement income from DC plans, particularly for lower-income workers and those not already saving in DC plans in the baseline. Recent regulatory and legislative changes and proposals could have positive effects on DC plan coverage, participation, and saving. The Pension Protection Act of 2006 (PPA) facilitated the adoption of automatic plan features by plan sponsors that may increase DC participation and savings within existing plans. Proposals to expand the saver's credit could similarly encourage greater contributions by low-wage workers who are already covered by a DC plan. Other options, like the so-called "State-K" proposal, in which states would design low-cost DC plans, in partnership with private financial institutions, that employers could provide to employees, would seek to expand coverage among workers without current plans by encouraging employers to sponsor new plans. Other options would try to increase retirement account coverage by increasing the use of IRAs or creating new retirement savings vehicles outside of the voluntary employer-sponsored pension framework. Such proposals include automatic IRAs, in which employers would be required to allow employees, through automatic enrollment, to contribute to IRAs by direct payroll deposit, and universal account proposals, in which all workers would be given a retirement account regardless of whether they had any employment-based pension coverage. Changing certain traditional DC plan defaults may have a significant impact on DC participation and savings. Research suggests that employees exhibit inertia regarding plan participation and contributions, which can reduce DC savings through failure to participate or to increase contributions over time. To reverse the effects of these tendencies, some experts have suggested changing default plan actions to automatically sign up employees for participation, escalate contributions, and set default investment options unless workers opt out. Some studies have shown that automatic enrollment may increase DC plan participation. For example, in one study of a large firm, automatic enrollment increased participation from 57 percent among employees eligible to participate 1 year before the firm adopted automatic enrollment to 86 percent among those hired under automatic enrollment. Another study finds that, prior to automatic enrollment, 26 to 43 percent of employees at 6 months' tenure participated in the plan at three different companies; under automatic enrollment, 86 to 96 percent of employees participated.
Some also advocate automatically rolling over DC savings into an IRA when employees separate from their employers to further increase retirement savings. Our own simulations show that universal account rollover to a tax-preferred account, such as a new plan or an IRA, would increase projected retirement savings by 11 percent on average, with the biggest percentage increases for lowest-income workers. Various regulatory and legislative changes have focused on default DC plan features. In 1998, the IRS first approved plan sponsor use of automatic enrollment—the ability for plans to automatically sign employees up for a 401(k) plan (from which the employee can opt out)—and subsequently issued several rulings that clarified the use of other automatic plan features and the permissibility of automatic features in 403(b) and 457 plans. Accordingly, the percentage of 401(k) plans using automatic plan features has increased in recent years. One annual study of plan sponsors found that in 2004, 12.4 percent of 401(k) plans were automatically enrolling participants, and this number increased to 17.5 percent of plans in 2005. The percentage of plans automatically increasing employee contributions also rose from 6.8 percent in 2004 to 13.6 percent in 2005. Some experts have argued that, initially, some plan sponsors may have been hesitant to use automatic plan features because of legal ambiguities between state and federal law. However, clarifications relating to automatic enrollment and default investment in the PPA have led some plan sponsors and experts to expect more plans to adopt automatic plan features. Automatic DC plan features, however, may create complications for sponsors and participants that may limit any effect on savings and participation. Automatic enrollment may not help expand plan sponsorship; in fact, sponsors who offer a matching contribution may not want to offer automatic enrollment if they believe this will raise their pension costs. Also, if sponsors automatically invest contributions in a low-risk fund such as a money market fund, this could limit rates of return on balances. However, choosing a risky investment fund could subject automatic contributions to market losses. Some employees may not realize they have been signed up for a plan, and may be displeased to discover this, particularly if their automatically invested contributions have lost money. Other proposals would target plan formation or increase participation and retirement savings by expanding worker access to other account-based retirement savings vehicles like IRAs. Some of these alternative retirement savings proposals are voluntary in design, while others are more universal. State-K plans: The State-K proposal, described above, would seek to expand employee access to account-based retirement plans by having states design and offer low-cost DC plans that employers could adopt (Gen. Assem., Reg. Sess. (Md. 2006)). However, it is unclear to what extent employers would adopt such plans. The Automatic IRA: The Automatic IRA proposal would make direct deposit or payroll deduction saving into an IRA available to all employees by requiring employers that do not sponsor any retirement plan to offer withholding to facilitate employee contributions. To maximize participation, employees would be automatically enrolled at 3 percent of pay, or could elect to opt out or to defer a different percentage of pay to an IRA, up to the maximum IRA annual contribution limit ($4,000 for 2007; $5,000 for 2008). Employers would not be required to choose investments or set up the IRAs, which would be provided mainly by the private-sector IRA trustees and custodians that currently provide them.
Employers also would not be required or permitted to make matching contributions, and would not need to comply with the Employee Retirement Income Security Act of 1974 (ERISA) or any qualified plan standards such as nondiscrimination requirements. Employers, however, would be required to provide notice to employees, including information on the maximum amount that can be contributed to the plan on an annual basis. One congressional proposal would require employers, other than small or new ones, to offer payroll deposit IRA arrangements to employees not eligible for pension plans and permit automatic enrollment in such IRAs in many circumstances. Participating IRAs would be required to offer a default investment consisting of life cycle funds similar to those offered by the Thrift Savings Plan, the DC plan for federal workers, or other investments specified by a new entity established for that purpose. Universal accounts: Similar to the automatic IRA, universal account (UA) proposals aim to establish retirement savings accounts for all workers and vary slightly based on employment-based pension access. Some proposals would have employers contribute to the accounts, while others would also have the federal government match contributions. One proposal suggests a 2 percent annual contribution from the federal government regardless of individual contributions, while another would provide for individual contributions only, capped at $7,500 per year. In 1999, the Clinton Administration proposed a UA to be established for each worker and spouse with earnings of at least $5,000 annually. Individuals would receive a tax credit of up to $300 annually. Additionally, workers could voluntarily contribute to the account up to specified amounts, with a 50 to 100 percent match by the federal government. This match would come in the form of a tax credit, and total voluntary contributions, including government contributions, would be limited to $1,000. Both the credit and the match would phase out as income increases, providing a progressive benefit and targeting low- and middle-income workers. Federal contributions would have revenue implications, while requiring employer contributions could increase employer compensation costs. Other proposals would expand the size and scope of the saver's credit to encourage greater contributions by those low-wage workers who are already covered by a DC plan that allows employee contributions. The saver's credit, originally proposed in 2000 as an outgrowth of the 1999 UA proposal, in the form of a government matching deposit on some voluntary contributions to IRAs and 401(k) plans, currently provides a nonrefundable tax credit to low- and middle-income savers of up to 50 percent of their annual IRA or 401(k) contributions, on contributions of up to $2,000. However, according to one analysis, because the credit is nonrefundable, only about 17 percent of those with incomes low enough to qualify for the credit would receive any benefit if they contributed to a plan. Some analysts think that expanding the saver's credit, or creating direct transfers such as tax rebates or deposits into retirement savings accounts, could increase plan contributions specifically for low- and middle-income workers. Making the saver's credit refundable to the participant could also provide a direct transfer to the tax filer in lieu of a retirement account match, but offers no assurance that funds would be saved or deposited into a retirement account.
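Because a nonrefundable credit can only offset tax owed, the benefit is capped at a filer's pre-credit tax liability, which for many low-income filers is small or zero. A minimal sketch of that mechanic follows; the 50 percent rate and $2,000 contribution cap come from the description above, while the tax liability figures are hypothetical.

    # Sketch of how nonrefundability limits the saver's credit.
    # Credit rate and contribution cap follow the description above;
    # the tax liability amounts are hypothetical.

    def savers_credit(contribution, credit_rate, tax_liability,
                      contribution_cap=2000):
        """Nonrefundable credit: cannot exceed tax owed before the credit."""
        credit = credit_rate * min(contribution, contribution_cap)
        return min(credit, max(tax_liability, 0.0))

    # A filer contributing $2,000 at the 50 percent rate earns a $1,000
    # credit on paper, but with only $200 of pre-credit tax liability
    # receives just $200. A refundable credit would pay the full $1,000.
    print(savers_credit(2000, 0.50, 200))   # 200
    print(savers_credit(2000, 0.50, 1500))  # 1000.0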
A refundable tax credit would also have revenue implications for the federal budget. The DC plan has clearly overtaken the DB plan as the principal retirement plan for the nation's private sector workforce, and its growing dominance suggests its increasingly crucial role in the retirement security of current and future generations of workers. The current DC-based system faces major challenges, like its DB-based predecessor, in terms of coverage, participation, and lifetime distributions. Achieving retirement security through DC plans carries particular challenges for workers, since accumulating benefits in an account-based plan requires more active commitment and management from individuals than it does for DB participants. Since workers must typically sign up and voluntarily reduce their take-home pay to contribute to their DC plans, invest this money wisely over their working years, and resist withdrawing from balances prior to retirement, it is perhaps to be expected that many of those who have the opportunity to participate save little. Our results on both current and projected plan balances suggest that while some workers save significant amounts toward their retirement in DC plans, a large proportion of workers will likely not save enough in DC plans for a secure retirement. Of particular concern are the retirement income challenges faced by lower earners. Many of these workers face competing income demands for basic necessities that may make contributions to their retirement plans difficult. Further, the tax preferences that may entice higher-income workers to contribute to their DC plans may not entice low-income workers who have plan coverage, since these workers face relatively low marginal tax rates. Our model results suggest that other measures, such as automatic enrollment and rollover of funds, may make a difference for some lower-income workers. Should pension policy, as embodied by the automatic provisions in PPA, continue to move in this direction, it should focus on those workers most in need of enhanced retirement income prospects. We provided a draft of this report to the Department of Labor and the Department of the Treasury, as well as to five outside reviewers. Neither agency provided formal comments. We incorporated any technical comments we received throughout the report as appropriate. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution until 30 days after the date of this letter. At that time, we will send copies of this report to the Secretary of Labor, the Secretary of the Treasury, appropriate congressional committees, and other interested parties. We will also make copies available to others upon request. In addition, the report will be available at no charge on GAO's website at http://www.gao.gov. If you have any questions concerning this report, please contact Barbara Bovbjerg at (202) 512-7215. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. To analyze saving in DC plans, we examined data from the Federal Reserve Board's Survey of Consumer Finances (SCF). This triennial survey asks extensive questions about household income and wealth components. We used the latest available survey, from 2004.
The SCF is widely used by the research community, is continually vetted by the Federal Reserve and users, and is considered to be a reliable data source. The SCF is believed by many to be the best source of publicly available information on household finances. Further information about our use of the SCF, including sampling errors, as well as definitions and assumptions we made in our analysis, is detailed below. We also reviewed published statistics in articles by public policy groups and in academic studies. To analyze how much Americans can expect to save in DC plans over their careers and the factors that affect these savings, we used the Policy Simulation Group's (PSG) microsimulation models to run various simulations of workers saving over a working career, changing various inputs to model different scenarios that affect savings at retirement. PENSIM is a pension policy simulation model that has been developed for the Department of Labor to analyze lifetime coverage and adequacy issues related to employer-sponsored pensions in the United States. We projected account balances at retirement for PENSIM-generated workers under different scenarios representing different pension features, individual behavioral decisions, and market assumptions. See below for further discussion of PENSIM and our assumptions and methodologies. To analyze the plan- or government-level policies that might best increase participation and savings in DC plans, we synthesized information gathered from interviews of plan practitioners, financial managers, and public policy experts, as well as from academic and policy studies on DC plan participation and savings. We also researched current government initiatives and policy proposals to broaden participation in account-based pension plans and increase retirement savings. We conducted our work from July 2006 to October 2007 in accordance with generally accepted government auditing standards. The 2004 SCF surveyed 4,522 households about their pensions, incomes, labor force participation, asset holdings and debts, use of financial services, and demographic information. The SCF is conducted using a dual-frame sample design. One part of the design is a standard, multi-stage area-probability design, while the second part is a special oversample of relatively wealthy households. This is done in order to accurately capture financial information about the population at large as well as characteristics specific to the relatively wealthy. The two parts of the sample are adjusted for sample nonresponse and combined using weights to provide a representation of households overall. In addition, the SCF excludes people on the Forbes Magazine list of the 400 wealthiest people in the United States. Furthermore, the 2004 SCF dropped three observations from the public data set that had net worth at least equal to the minimum level needed to qualify for the Forbes list. The SCF is a probability sample based on random selections, so the 2004 SCF sample is only one of a large number of samples that might have been drawn. Since each sample could have provided different estimates, we express our confidence in the precision of our particular sample's results as a 95 percent confidence interval (e.g., plus or minus 4 percentage points). This is the interval that would contain the actual population value for 95 percent of the samples we could have drawn.
As a result, we are 95 percent confident that each of the confidence intervals in this report will include the true values in the study population. All percentage estimates based on GAO analysis of 2004 SCF data used in this report have 95 percent confidence intervals that are within plus or minus 4 percentage points, with the exceptions described in table 6 below. Other numerical estimates based on GAO analysis of 2004 SCF data used in this report have 95 percent confidence intervals that are within 25 percent of the estimate itself, with exceptions described in table 7. Because of the complexity of the SCF design and the need to suppress some detailed sample design information to maintain confidentiality of respondents, standard procedures for estimating sampling errors could not be used. Further, the SCF uses multiple imputations to estimate responses to most survey questions to which respondents did not provide answers. Sampling error estimates for this report are based on a bootstrap technique using replicate weights to produce estimates of sampling error that account for the variability due to both sampling and imputation. The SCF collects detailed information about an economically dominant single individual or couple in a household (what the SCF calls a primary economic unit), where the individuals are at least 18 years old. We created an additional sample containing information on 7,471 individuals by treating respondents and their spouses or partners as separate individuals. When we discuss individuals in this document, we are referring to this sample. When we refer to all workers, we are referring to the subpopulation of workers within this individual sample. In households where there are additional adult workers, beyond the respondent and the spouse or partner, who may also have earnings and a retirement plan, information about these additional workers is not captured by the SCF and is therefore not part of our analysis. It is also important to note that the SCF was designed to be used as a household survey, and some information could not be broken into individual-level information. Where that was the case, we presented only household-level information. We defined "worker" relatively broadly, beginning with the set of all those who reported that they were working, including those who reported working in combination with some other activity, such as "worker plus disabled" or "worker plus retired." We then excluded those workers who reported that they were self-employed from our analysis. Our definition of DC plans includes the following plans: 401(k); 403(b); 457; thrift/savings plan; profit-sharing plan; portable cash option plan; deferred compensation plan, n.e.c.; SEP/SIMPLE; money purchase plan; stock purchase plan; and employee stock ownership plan (ESOP). The SCF and other surveys that are based on self-reported data are subject to several other sources of nonsampling error, including the inability to get information about all sample cases; difficulties of definition; differences in the interpretation of questions; respondents' inability or unwillingness to provide correct information; and errors made in collecting, recording, coding, and processing data. These nonsampling errors can influence the accuracy of information presented in the report, although the magnitude of their effect is not known. Our analysis of the 2004 SCF yielded slightly lower participation rates than other data sets that consider pensions.
For example, 2004 Bureau of Labor Statistics (BLS) data indicate a somewhat higher rate of active participation in DC accounts, 42 percent, compared with our finding of 36 percent. One possible factor contributing to this difference is that BLS surveys establishments about their employees, while the SCF surveys individuals who report on themselves and their households; it is possible that SCF respondents may fail to report all retirement accounts, while BLS is capturing a greater proportion of them. Also, the SCF considered both public and private sector workers, while the BLS statistic is only for private sector workers. Differences may also be explained by different definitions of workers and participation, question wording, or lines of questioning. The SCF appears to provide a lower bound on the estimation of pension coverage among four major data sets. To project lifetime savings in DC pensions and related retirement plans with personal accounts, and to identify the effects of changes in policies, market assumptions, or individual behavior, we used the Policy Simulation Group's (PSG) Pension Simulator (PENSIM) microsimulation models. PENSIM is a dynamic microsimulation model that produces life histories for a sample of individuals born in the same year. The life history for a sample individual includes different life events, such as birth, schooling events, marriage and divorce, childbirth, immigration and emigration, disability onset and recovery, and death. In addition, a simulated life history includes a complete employment record for each individual, including each job's starting date, job characteristics, pension coverage and plan characteristics, and ending date. The model has been developed by PSG since 1997, with funding and input from the Office of Policy and Research at the Employee Benefits Security Administration (EBSA) of the U.S. Department of Labor, consistent with the recommendations of the National Research Council panel on retirement income modeling. PENSIM sets the timing for each life event by using data from various longitudinal data sets to estimate a waiting-time model (often called a hazard function model) using standard survival analysis methods. PENSIM incorporates many such estimated waiting-time models into a single dynamic simulation model. This model can be used to simulate a synthetic sample of complete life histories. PENSIM employs continuous-time, discrete-event simulation techniques, such that life events do not have to occur at discrete intervals, such as annually on a person's birthday. PENSIM also uses simulated data generated by another PSG simulation model, SSASIM, which produces simulated macro-demographic and macroeconomic variables. PENSIM imputes pension characteristics using a model estimated with 1996 to 1998 establishment data from the BLS Employee Benefits Survey (EBS), now known as the National Compensation Survey (NCS). Pension offerings are calibrated to historical trends in pension offerings from 1975 to 2005, including plan mix, types of plans, and employer matching. Further, PENSIM incorporates data from the 1996-1998 EBS to impute access to and participation rates in DC plans in which the employer makes no contribution, which BLS does not report as pension plans in the NCS. The inclusion of these "zero-matching" plans enhances PENSIM's ability to accurately reflect the universe of pension plans offered by employers.
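To illustrate the waiting-time approach that drives event timing in models like PENSIM, the minimal sketch below draws a waiting time from a constant-hazard (exponential) model using the inverse-CDF method, so events can occur at any point in continuous time rather than at annual steps. This is a simplification for illustration only: PENSIM's hazards are estimated from longitudinal data and incorporate covariates and duration dependence, and the 0.15 hazard rate here is hypothetical.

    import math
    import random

    # Minimal sketch of sampling an event time from a waiting-time
    # (hazard function) model. A constant hazard is assumed here for
    # simplicity; estimated hazards in PENSIM are far richer.

    def draw_waiting_time(hazard_rate):
        """Inverse-CDF draw of an exponential waiting time."""
        u = random.random()
        return -math.log(1.0 - u) / hazard_rate

    # Example: with a hypothetical job-separation hazard of 0.15 per
    # year, simulate the length of one job spell in continuous time.
    random.seed(1)
    print(round(draw_waiting_time(0.15), 1))  # 1.0 year with this seed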
PENSIM assumes that 2005 pension offerings, including the imputed zero-matching plans, are projected forward in time. PSG has conducted validation checks of PENSIM's simulated life histories against both historical life history statistics and other projections. Different life history statistics have been validated against data from the Survey of Income and Program Participation (SIPP), the Current Population Survey (CPS), Modeling Income in the Near Term (MINT3), the Panel Study of Income Dynamics (PSID), and the Social Security Administration's Trustees Report. PSG reports that PENSIM life histories have produced annual population, taxable earnings, and disability benefits for the years 2000 to 2080 similar to those produced by the Congressional Budget Office's long-term social security model (CBOLT) and to those shown in the Social Security Administration's 2004 Trustees Report. According to PSG, PENSIM generates simulated DC plan participation rates and account balances that are similar to those observed in a variety of data sets. For example, measures of central tendency in the simulated distribution of DC account balances among employed individuals are similar to those produced by an analysis of the Employee Benefit Research Institute (EBRI)-Investment Company Institute (ICI) 401(k) database and of the 2004 SCF. GAO performed no independent validation checks of PENSIM's life histories or pension characteristics. In 2006, EBSA submitted PENSIM to a peer review by three economists. The economists' overall reviews ranged from highly favorable to highly critical. While the economist who gave PENSIM a favorable review expressed a "high degree of confidence" in the model, the one who criticized it focused on PENSIM's reduced-form modeling. This means that the model is grounded in previously observed statistical relationships among individuals' characteristics, circumstances, and behaviors, rather than in any underlying theory of the determinants of behaviors, such as the common economic theory that individuals make rational choices as their preferences dictate and thereby maximize their own welfare. The third reviewer raised questions about specific modeling assumptions and possible overlooked indirect effects. PENSIM allows the user to alter one or more inputs to represent changes in government policy, market assumptions, or personal behavioral choices and analyze the subsequent impact on pension benefits. Starting with a 2 percent sample of a 1990 birth cohort, totaling 104,435 people at birth, our baseline simulation includes the following key assumptions and features. For our report, we focus exclusively on accumulated balances in DC plans and ignore any benefits an individual might receive from DB plans or from Social Security. Our reported benefits and replacement rates therefore capture just one source of potential income available to a retiree. Workers accumulate DC pension benefits from past jobs in one rollover account, which continues to receive investment returns, along with any benefits from a current job. At retirement, these are combined into one account. Because we focus on DC plan balances only, we assume all workers are ineligible to participate in DB plans and do not track Social Security benefits. Plan participants invest all assets in their account in life cycle funds, which adjust the mix of assets between stocks and government bonds as the individual ages.
Stocks earn a nonstochastic real annual return of 6.4 percent, and government bonds earn a real return of 2.9 percent per year. In one simulation, we use the government bond rate on all plan assets. Using these rates of return reflects assumptions used by the Social Security Administration's Office of the Chief Actuary (OCACT) in some of its analyses of trust fund investment. Workers purchase a single, nominal life annuity, typically at retirement, which occurs between the ages of 62 and 70. Anyone who becomes permanently disabled at age 45 or older also purchases an immediate annuity at their disability age. We eliminate from the sample cohort members who: 1) die before they retire, at whatever age; 2) die prior to age 55; 3) immigrate into the cohort at an age older than 25; or 4) become permanently disabled prior to age 45. We assume that the annuity provider charges an administrative load on the annuity such that in all scenarios the provider's revenues balance the annuity costs (i.e., zero profit). Replacement rates equal the annuity value of DC plan balances divided by a "steady earnings" index. This index reflects career earnings, calibrated to the Social Security Administration's age-65 average wage index (AWI). PENSIM computes steady earnings by first computing the present value of lifetime wages. Then, it calculates a scaling factor that, when multiplied by the present value of lifetime earnings for a 1990 cohort member earning the AWI from ages 21 to 65, produces the individual's present value of lifetime earnings. This scaling factor is multiplied by AWI at age 65, then adjusted to 2007 dollars. Using this measure, as opposed to average pay for an individual's final 3 or 5 years of working, minimizes the problems presented by a worker who has irregular earnings near the end of his or her career, perhaps because of reduced hours. For household replacement rates, we use a combined annuity value of worker-spouse lifetime DC plan savings and a combined measure of steady family earnings. Starting from this baseline model, we vary key inputs and assumptions to see how these variations affect pension benefits and replacement rates at retirement. The scenarios we ran include: (1) Universal rollover of DC plan balances. All workers with a DC balance roll it over into an Individual Retirement Account or another qualified plan upon job separation, as opposed to cashing out the balance, in which case the money is assumed lost for retirement purposes. (2) Immediate eligibility and participation in a plan. A worker who would be offered a plan has no eligibility waiting period and immediately enrolls. This does not necessarily mean that the participant makes immediate or regular contributions; contribution levels are determined stochastically by PENSIM based on worker characteristics. (3) Delayed retirement. Workers work beyond the retirement age determined by PENSIM in the baseline run. In one scenario, workers work up to one extra year; in another, they delay retirement for up to 3 years, although 70 remains the maximum retirement age. (4) Raised contribution limits. We set annual contribution limits starting in 2007 to $25,000 per individual, up from $15,500 under current law, and $60,000 for combined employer-employee contributions, up from $45,000 under current law. These limits rise with cost-of-living changes in subsequent years, as is the case in our baseline model.
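The steady-earnings calculation described above can be sketched compactly, as shown below. In this illustration, the 3 percent discount rate and the flat benchmark wage path are simplifying assumptions made for brevity; PENSIM discounts actual simulated wage paths against a benchmark earner of the age-65 AWI and adjusts the result to 2007 dollars.

    # Sketch of the steady-earnings replacement rate calculation
    # described above. The discount rate and flat AWI path are
    # simplifying assumptions, not PENSIM's actual inputs.

    def present_value(earnings_by_age, rate):
        return sum(w / (1 + rate) ** t for t, w in enumerate(earnings_by_age))

    def steady_earnings(worker_earnings, awi_at_65, rate=0.03):
        # Benchmark: a worker earning the AWI every year from ages 21 to 65.
        benchmark = [awi_at_65] * 45  # flat path assumed for brevity
        scale = (present_value(worker_earnings, rate)
                 / present_value(benchmark, rate))
        return scale * awi_at_65

    def replacement_rate(annuity_income, worker_earnings, awi_at_65):
        return annuity_income / steady_earnings(worker_earnings, awi_at_65)

    # Example: an irregular career earning $30,000 in alternating years,
    # measured against a hypothetical $40,000 AWI, with $8,000 of annual
    # annuity income at retirement.
    career = [30000 if yr % 2 == 0 else 0 for yr in range(45)]
    print(round(replacement_rate(8000, career, 40000), 2))  # ~0.52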
Lifetime summary statistics of the simulated 1990 cohort's workforce and demographic variables give some insight into the model's projected DC savings at retirement that we report (see tables 8 and 9). The 78,045 people in the sample who have some earnings, do not immigrate into the cohort after age 25, live to age 55, and retire (or become disabled at age 45 or older) work a median 29.4 years full-time and 2.1 years part-time, with median "steady" earnings of $46,122 (in 2007 dollars). Those whose earnings fall in the lowest quartile work full-time for a median of only 14.1 years and part-time for 9.1 years, with 13.4 years at their longest-tenured job; this group's median annual steady earnings measure $16,820. In contrast, those in the highest quartile of earnings work for a median 34.8 years, including 19.5 years at their longest job, and have median steady earnings of $126,380 per year. The results also show that pension coverage varies somewhat across income groups. About 83 percent of workers in the lowest income quartile have at least one job during their working careers in which they are covered by a DC plan, and they are eligible for DC plan coverage for a median 9.4 years. In contrast, at least 90 percent of workers in the highest three income quartiles have some DC coverage during their careers. Those in the highest income quartile are eligible for DC participation for a median 25.2 years over their careers. Cross-sectional results for the sample cohort also provide some insights into the model's assumptions, as well as some further insights into the relatively low projected sample replacement rates (see table 10). These statistics describe the working characteristics of each employed individual at a randomly determined age sometime between 22 and 62 in order to provide a snapshot of a "current" job for most of the sample. Among those employed at the time of this snapshot, 61.8 percent had an employer who sponsors a DC plan. Of these workers with a plan offered, 94 percent were eligible to participate, and among those eligible, 67 percent participated. Taking all of these percentages together, this means that at any one time only 38.9 percent of the working population actively participated in a DC plan in our projections. Even among these participants, only 56.9 percent reported making a contribution to the plan in the previous year, while 45.7 percent had an employer contribution. Median combined employer-employee contributions in the previous year were 6.2 percent of earnings in our simulation. Other studies have projected DC plan savings for workers saving over their entire working careers. These studies generally find higher projected replacement rates from DC plan savings than our simulations do. However, each study makes different key assumptions, particularly about plan coverage, participation, and contributions. A 2007 study by Patrick Purcell and Debra B. Whitman for the Congressional Research Service (CRS) simulates DC plan replacement rates based on earnings, contributions, and the rate of return on plan balances. CRS projects savings for households that begin saving at age 25, 35, or 45. The study estimates 2004 earnings using the March 2005 CPS as starting wages, and assumes that households experience an annual wage growth rate of 1.1 percent. Households are randomly assigned a 6 percent, 8 percent, or 10 percent retirement plan contribution rate every year from their starting age until age 65.
The study assumes households allocate 65 percent of their retirement account assets to the Standard & Poor's 500 index of stocks from ages 25 to 34, 60 percent to stocks from ages 35 to 44, 55 percent to stocks from ages 45 to 54, and 50 percent to stocks from age 55 onward, with the remaining portfolio assets invested in AAA-rated corporate bonds. A Monte Carlo simulation based on historical returns on stocks and bonds determines annual rates of return. Replacement rates represent annuitized DC plan balances at age 65 divided by final 5-year average pay. After running the simulations, CRS finds variation in replacement rates depending on rate of return, years of saving, and earnings percentile. In the CRS "middle estimate," an unmarried householder who saves for 30 years, has annual household earnings in the 50th percentile, contributes 8 percent each year until retirement, and earns returns on contributions in the 50th percentile would have a 50 percent replacement rate (see table 11). The projected replacement rate rises to 98 percent with 40 years of saving and falls to 22 percent with just 20 years of saving. Assuming a 6 percent annual contribution reduces projected replacement rates by about 10 to 30 percent. For example, an unmarried householder at the 50th percentile of annual earnings and the 50th percentile of returns saving for 40 years is projected to have a replacement rate of 72 percent at a 6 percent annual contribution (see table 12). All of the CRS estimates, however, exceed our projections, in part because CRS assumes constant participation in, and contributions to, a DC plan. In addition, CRS calculated annuity equivalents of accumulated DC balances based on current annuity prices; for younger workers retiring several decades into the future, we would expect the price of a given level of annuity income to be higher than today's levels because of longer life expectancies. This would lower the replacement rates for any projected lump sum. A 2005 study by Sarah Holden of ICI and Jack VanDerhei of EBRI simulates, as a baseline scenario, retirement savings at age 65 for a group in their 20s and 30s in the year 2000. The baseline assumes workers are continuously covered by a DC plan throughout their careers and will participate continuously. However, the authors also run the model assuming this group will have participation rates similar to current rates by allowing workers to not be covered by, participate in, or contribute to a DC plan. Their model also incorporates the possibility that a participant might cash out a DC plan balance upon leaving a job. Replacement rates are calculated by earnings quartile for participants retiring between 2035 and 2039 as the annuity value of age-65 plan balances divided by final 5-year average pay. The EBRI/ICI baseline projections, starting with a sample of plan participants, show a median replacement rate of 51 percent for the lowest earnings quartile and 67 percent for the highest (see table 13). The authors analyze the effect of other plan or behavioral assumptions. For example, replacement rates fall significantly when the projections relax the assumption of continuous ongoing eligibility for a 401(k) plan, although they remain higher than our projections, perhaps because the projections start with current participants and assume continuous employment.
When Holden and VanDerhei include nonparticipants and assume automatic enrollment with a 6 percent employee contribution and investment of assets in a life cycle fund, replacement rates rise significantly relative to projections without automatic enrollment. Although EBRI/ICI project a larger effect on replacement rates from automatic enrollment than our projections show, they similarly find a greater increase in savings for lower-income workers.

A forthcoming study by Poterba, Venti, and Wise uses the Survey of Income and Program Participation (SIPP) to project DC plan balances at age 65. To project participation, the authors assume that DC plan sponsorship will continue to grow, although more slowly than during recent decades, and they calculate participation by earnings decile within 5-year age intervals. The authors assume that 60 percent of plan contributions are allocated to large-capitalization equities and 40 percent to corporate bonds, with average nominal rates of return of 12 percent for equities and 6 percent for corporate bonds. In addition, the authors run a projection assuming the rate of return on equities is 300 basis points below the historical rate. They determine a person's likelihood of DC plan participation based on age, cohort, and earnings, as well as the probability of cashing out an existing DC plan balance upon leaving a job. The authors simulate earnings histories based on data from the Health and Retirement Study (HRS) and impute earnings for younger cohorts for which data are not available. They assume an annual combined employee-employer contribution rate of 10 percent for each year an individual participates, and they do not account for increases in annual contributions or for changes made to DC plans by the Pension Protection Act, such as a possible increase in participation from automatically enrolling employees.

The authors project retirement savings, by lifetime earnings decile, for individuals retiring in each decade between 2000 and 2040. For workers in the fifth earnings decile retiring in 2030 at age 65, the authors project a mean DC plan balance of $272,135 in 2000 dollars, and $895,179 for the highest earnings decile (see table 14). Earners in the lowest and second deciles, however, have projected average balances of just $1,372 and $21,917, respectively. The projected average DC plan assets for 2030 retirees fall to $179,540 for the fifth earnings decile, $614,789 for the highest decile, and $810 for the lowest decile when the authors assume an annual rate of return 300 basis points below the historical rate (see table 15).

Finally, a 2007 study by William Even and David Macpherson estimates replacement rates for workers continuously enrolled in a DC plan between ages 36 and 65. The authors simulate a sample using the SCF and generate an age-earnings profile for the sample using data on pension-covered workers in the 1989 SCF. The authors also use the SCF to generate annual contributions to DC plans, which are estimated using a person's earnings, age, education, gender, race, ethnicity, marital status, union coverage, and firm size. The authors also create an artificial sample of workers who are predicted to be eligible for a DC plan but choose not to participate.
Finally, the authors assume three different rates of return on pension contributions: (1) a 3 percent rate of return based on historical returns on government bonds; (2) a historical-returns portfolio based on an account mix of 75 percent in stocks, split between large- and small-capitalization equities, and 25 percent split among long-term corporate bonds, long-term government bonds, midterm government bonds, and Treasury bills; and (3) a 6.5 percent real rate of return based on the average real rate of return from 1985 to 1994 for DC plans with over 100 participants. In calculating annuity rates, the authors rely on mortality tables for group annuitants rather than for the population as a whole and do not include the charges companies add for marketing and administrative expenses.

The authors find that replacement rates vary across the income distribution. For example, low-income workers who are continuously enrolled in a DC plan have an estimated median replacement rate of 30 percent (see table 16), and the average replacement rate for such workers is 44 percent. Middle-income and high-income workers have median replacement rates of 31 percent and 35 percent, respectively. The authors' estimates are likely higher than ours because the authors assume continuous enrollment.

In addition to the contact named above, Charles A. Jeszeck, Mark M. Glickman, Katherine Freeman, Leo Chyi, Charles J. Ford, Charles Willson, Edward Nannenhorn, Mark Ramage, Joe Applebaum, and Craig Winslow made important contributions to this report.
Over the last 25 years, pension coverage has shifted primarily from "traditional" defined benefit (DB) plans, in which workers accrue benefits based on years of service and earnings, toward defined contribution (DC) plans, in which participants accumulate retirement balances in individual accounts. DC plans provide greater portability of benefits but shift the responsibility of saving for retirement from employers to employees. This report addresses the following issues: (1) What percentage of workers participate in DC plans, and how much have they saved in them? (2) How much are workers likely to have saved in DC plans over their careers, and to what degree do key individual decisions and plan features affect plan saving? (3) What options have been recently proposed to increase DC plan coverage, participation, and savings? GAO analyzed data from the Federal Reserve Board's 2004 Survey of Consumer Finances (SCF), the latest available; utilized a computer simulation model to project DC plan balances at retirement; reviewed academic studies; and interviewed experts.

GAO's analysis of 2004 SCF data found that only 36 percent of workers participated in a current DC plan. For all workers with a current or former DC plan, including rolled-over retirement funds, the total median account balance was $22,800. Among workers aged 55 to 64, the median account balance was $50,000, and workers aged 60 to 64 had a median of $60,600. Low-income workers had less opportunity than the average worker to participate in DC plans, and when offered an opportunity to participate in a plan, they were less likely to do so. Modest balances might be expected, given the relatively recent prominence of 401(k) plans.

Projections of DC plan savings over a career for workers born in 1990 indicate that DC plans could on average replace about 22 percent of annualized career earnings at retirement for all workers, but projected "replacement rates" vary widely across income groups and with changes in assumptions. Projections show almost 37 percent of workers reaching retirement with zero plan savings. Projections also show that workers in the lowest income quartile have projected replacement rates of 10.3 percent on average, with 63 percent of these workers having no plan savings at retirement, while the highest-income workers have average replacement rates of 34 percent. Assuming that workers offered a plan always participate raises projected overall savings and substantially reduces the number of workers with zero savings, particularly among lower-income workers.

Recent regulatory and legislative changes and proposals could have positive effects on DC plan coverage, participation, and savings, some by facilitating the adoption of automatic enrollment and escalation features. Some options focus on encouraging plan sponsorship, while others would create accounts for people not covered by an employer plan. Our findings indicate that DC plans can provide a meaningful contribution to retirement security for some workers but may not ensure the retirement security of lower-income workers.
In recognition of the cost, energy usage, and environmental impact of IT, the federal government has undertaken various initiatives to promote the acquisition and use of more efficient and environmentally friendly IT products, commonly referred to as electronic stewardship or "green IT." According to OMB Circular A-11, green IT refers to the application of sustainable and environmentally efficient practices to the use of computing resources. Green IT applies to a broad range of activities that span the entire lifecycle of IT capital assets, including but not limited to the acquisition, operations and use, and disposition of equipment. These activities include developing programs for purchasing equipment that meets certain environmental standards, operating and managing IT equipment in ways that reduce energy usage and conserve resources, and disposing of equipment in ways that lessen the environmental impact of potentially hazardous waste.

Purchasing equipment. Tools exist to help organizations purchase more environmentally friendly IT equipment. One such industry tool is the Electronic Product Environmental Assessment Tool (EPEAT®), which was developed along the lines of EPA and DOE's Energy Star program to assist consumers in comparing and selecting laptop computers, desktop computers, and monitors with environmentally preferable attributes. Through EPEAT, manufacturers that meet increasing levels of energy efficiency and environmental standards are rewarded with a certification label of bronze, silver, or gold. Using this tool, consumers can also evaluate the design of an electronic product for energy conservation, reduced toxicity, extended lifespan, and end-of-life recycling, among other things.

Operating and managing IT resources. Effective management of IT equipment in use can help reduce energy usage and conserve resources. For example, monitoring and efficiently managing IT equipment's power use can help organizations track and reduce specific energy costs. Software can be used to turn off or power down personal computers when they are not being used and to track network power usage. New techniques, such as computer virtualization, are also being used to save energy. Computer virtualization allows multiple, software-based virtual machines with different operating systems to run in isolation, side by side, on the same physical machine. Virtual machines can be stored as files, making it possible to save a virtual machine and move it from one physical server to another. Virtualization is often used as part of cloud computing.

Disposing of equipment. Finally, IT equipment may be donated, sold, recycled, or returned to the manufacturer in lieu of disposal in a landfill. Organizations donate usable electronics to qualified organizations, such as public schools, and sell usable or refurbishable equipment to the general public. Another option is to recycle unusable and unsold equipment using environmental practices that help keep components out of landfills and recover materials for use in the manufacture of new products.

The federal government purchases or leases approximately 1 million computers and monitors each year and estimates it will spend about $79 billion on IT in fiscal year 2011. This investment in IT has environmental impacts that can be described in terms of cost, energy usage, and waste.
Examples of these impacts include the following:

• According to the Federal Electronics Challenge (FEC) Program Manager, the federal government disposes of approximately 750,000 computers and monitors that have reached the end of their useful lives each year; 50 percent are reused, 40 percent are recycled, and 10 percent are discarded.

• Office electronics can contain materials such as lead, mercury, and other constituents that are harmful to human health and the environment.

• The global greenhouse gas emissions attributable to information and communication technologies, including data centers and computers, are nearly 2 percent of all emissions.

According to FEC data, there are potential environmental benefits of implementing green IT in federal agencies. While we did not validate the data reported by various federal agencies, FEC estimated that federal agencies' efforts achieved energy savings of over 500,000 megawatt-hours in fiscal year 2009, as well as cost savings of over $48 million.

Two federal organizations play key roles related to green IT:

• OMB reviews and approves agency plans and prepares scorecards to track agencies' progress toward achieving various federal goals and requirements, including those for electronic stewardship.

• CEQ, in conjunction with OMB, coordinates federal environmental efforts and works with agencies in the development of policies and initiatives.

Federal policy and guidance direct agencies to take a variety of green IT-related actions. Specifically, two executive orders outline broad requirements for green IT as part of a larger sustainability effort. The six agencies in our review have taken steps to implement these green IT-related requirements. However, the benefits of the agencies' efforts cannot be measured because key performance information is not available.

Two executive orders, 13423 and 13514, assign responsibility to federal agencies for meeting green IT-related requirements. These requirements, often referred to as electronic stewardship, are part of a much larger effort covered by the executive orders to move federal operations toward environmental sustainability. According to Executive Order 13423 implementing instructions, electronic stewardship seeks to reduce the environmental and energy impacts of electronic product acquisition, operation and maintenance, and disposition through continual improvement in each of these lifecycle phases.

In 2007, Executive Order 13423, "Strengthening Federal Environmental, Energy, and Transportation Management," set goals for federal agencies to improve energy efficiency and reduce greenhouse gas emissions, among others. In addition, Section 2(h) of the executive order contains four broad green IT-related requirements that federal agencies are to follow:

• meet at least 95 percent of agencies' requirements for new electronic products with EPEAT-registered products, unless no applicable EPEAT standard exists;

• enable the Energy Star feature on agency computers and monitors;

• establish and implement policies to extend the useful life of agency electronic equipment; and

• use environmentally sound practices with respect to the disposition of agency electronic equipment that has reached the end of its useful life.

To assist the agencies in accomplishing Executive Order 13423 requirements, CEQ provided implementing instructions and directed the agencies to develop Electronic Stewardship Plans.
The implementing instructions elaborated on the goals in the executive order and included certain targets that agencies should set for implementing each requirement. Specifically, each agency should:

• ensure that 95 percent of its product acquisitions are EPEAT-registered;

• enable the Energy Star feature on 100 percent of its computers and monitors;

• use in-house agency computers for a minimum of 4 years before disposal; and

• identify acceptable partners for electronic recycling.

In 2009, Executive Order 13514, "Federal Leadership in Environmental, Energy, and Economic Performance," expanded on the agency requirements of Executive Order 13423. The executive order required federal agencies to submit to the Chair of CEQ and the Director of OMB a 2020 greenhouse gas pollution reduction target within 90 days and to increase energy efficiency, reduce fleet petroleum consumption, conserve water, reduce waste, support sustainable communities, and leverage federal purchasing power to promote environmentally responsible products and technologies. The executive order requires agencies to meet broad sustainability goals, such as:

• a 30 percent reduction in vehicle fleet petroleum use by fiscal year 2020;

• a 26 percent improvement in water efficiency by fiscal year 2020; and

• a 50 percent non-hazardous waste diversion by fiscal year 2015.

With regard to green IT, section 2(i) of the order contains five broad goals. Three of these are similar to those in Executive Order 13423, but the goals also include requirements related to power management and data center consolidation. Specifically, under this section, agencies are to:

• ensure procurement preference for EPEAT-registered electronic products;

• establish and implement policies to enable power management, duplex printing, and other energy-efficient or environmentally preferable features on all eligible agency electronic products;

• employ environmentally sound practices with respect to the agency's disposition of all agency excess or surplus equipment;

• ensure the procurement of Energy Star and Federal Energy Management Program-designated electronic equipment; and

• implement best management practices for energy-efficient management of servers and federal data centers.

To meet these requirements, Executive Order 13514 assigns the agencies several duties. The order requires agencies to designate a Senior Sustainability Officer, who is accountable for agency conformance to the requirements of the order. The sustainability officer is to develop and implement an annual Strategic Sustainability Performance Plan and monitor the agency's performance and progress in implementation. The plan is to be updated and submitted annually to the CEQ Chair and the OMB Director. Each sustainability plan is to identify agency goals, a schedule for meeting those goals, milestones, approaches for achieving results, quantifiable metrics for agency implementation, and prioritized actions based on lifecycle return on investment.

In addition, both OMB and CEQ have responsibilities, and have taken actions, related to agencies' progress in meeting the executive order requirements. Specifically, CEQ reviews and OMB approves each agency's sustainability plan. CEQ's duties include preparing, in coordination with OMB, reporting metrics to determine each agency's progress on the goals of the executive order and establishing interagency working groups that provide recommendations to CEQ for areas of improvement.
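Goals stated as percentage reductions against a baseline are easy to track once that baseline exists, which is why the missing baselines discussed later in this report matter. The sketch below shows the arithmetic in Python; the function and figures are hypothetical illustrations, not drawn from any agency's plan.

```python
def share_of_goal_met(baseline, current, target_reduction):
    """Fraction of a percentage-reduction goal achieved so far.

    For a 30 percent reduction goal, falling from a baseline of
    1,000,000 to 850,000 units is a 15 percent reduction, or half
    of the goal.
    """
    reduction_so_far = (baseline - current) / baseline
    return reduction_so_far / target_reduction

# Hypothetical fleet-petroleum figures, in gallons.
print(f"{share_of_goal_met(1_000_000, 850_000, 0.30):.0%} of the goal met")
```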
To assist the agencies in implementing the requirements, OMB and CEQ provided guidance through a template for developing the sustainability plans. According to CEQ officials, this template was not mandatory, but agencies were instructed to justify using a different approach.

The six agencies in our review are taking steps toward implementing the green IT-related requirements of the two executive orders. Each agency has designated a senior sustainability officer and submitted its sustainability plan to OMB and CEQ for review. Further, the agencies reported on various initiatives aimed at meeting the requirements. For example, according to officials, EPA donates the majority of its excess electronics to schools, state and local governments, eligible nonprofit organizations, and other federal agencies.

Table 1 shows the agencies' progress in implementing the green IT-related requirements of Executive Order 13423 (issued in 2007). As the table shows, the selected agencies have implemented most of the requirements associated with this executive order. They also all have plans to address the unmet ones. For example, DOC described an action planned to meet the Energy Star requirement and plans to report its progress in its updated sustainability plan.

Table 2 shows the agencies' progress in implementing the green IT-related requirements of Executive Order 13514 (issued in 2009). As table 2 shows, implementation to date of the requirements of this more recent executive order is not as far along. For requirements that have not yet been implemented, all six agencies described plans and efforts to meet them. For example, although none of the agencies have completed the requirement to implement best management practices for energy-efficient management of servers and data centers, they all described plans to do so.

While agencies have taken a variety of steps to implement green IT practices, the effectiveness of these efforts cannot be measured because of a lack of performance data. As previously mentioned, Executive Order 13514 requires the agencies to develop, implement, and annually update their sustainability plans to allow them to prioritize agency actions based on lifecycle return on investment. Among other things, these plans are to (1) identify agency goals, milestones, and quantifiable metrics and (2) identify opportunities for improvement and evaluate performance to determine benefits.

Each of the agencies' 2010 sustainability plans includes planned actions for meeting the requirements of the executive order. These are related to increasing the number of devices covered by Energy Star, improving data center efficiencies, and increasing the use of virtualization and cloud computing. In addition, most of the agencies' plans contained actions with associated percentage-based targets and milestones for meeting them over several fiscal years. However, the plans do not identify baseline information for the planned actions. A baseline is a starting point for measurement (e.g., an agency's current energy usage) that provides a basis for measuring progress. Our research has shown that measuring progress toward performance targets requires establishing such baseline information. Without baselines, it will be unclear what progress agencies have made in meeting their targets. In addition, the agency plans do not identify benefits linked to their specific green IT efforts.
Specifically, the targets identified in the agencies' plans are not defined in terms of benefits (such as dollar or energy savings), and as a result the agencies are not positioned to identify benefits from their activities and to use that information to evaluate and prioritize their efforts. For example, USDA had a goal to reduce the number of its data centers by 5 percent during fiscal year 2010. However, it is unclear whether or by how much meeting this 5 percent reduction goal was expected to result in energy or dollar savings or other benefits.

The limitations in the data on the effectiveness of agencies' efforts are due, in part, to challenges related to developing performance information, as well as to the guidance provided to the agencies by OMB and CEQ, which did not include instructions related to collecting this information. Officials from all six agencies stated that it was challenging to estimate baseline costs and energy use and to quantify the cost savings and reduction in energy consumption resulting from implementation of the executive order requirements. EPA officials stated that one reason for this is that not all of their buildings are submetered at the level necessary to capture this information. While we acknowledge that establishing baselines and identifying quantifiable benefits for some agency green IT activities can be challenging, developing such information, where possible, could help agencies better determine their progress toward meeting targets.

In addition, the guidance that OMB and CEQ provided to the agencies did not include instructions for developing baselines. CEQ officials told us that, other than the sustainability plan template, they have not issued any further guidance for the plans and do not plan to develop implementing instructions for Executive Order 13514. While CEQ does not plan to issue additional guidance, the agency has been working to develop a national strategy, or road map, for green IT-related initiatives. According to a November 2010 letter, CEQ requested that EPA and GSA join with the council in co-chairing an interagency task force to develop such a strategy. This strategy is to include an action plan to direct federal agencies in achieving requirements related to green IT. As of June 2011, this strategy, which was due on May 6, 2011 (180 days after the memorandum was issued), had not been released. If appropriately developed and implemented, such a strategy could provide additional guidance for agencies to measure the effectiveness of green IT efforts.

Without specific guidance related to establishing targets and identifying baselines that measure benefits, agencies, OMB, and CEQ will continue to be challenged in determining the actual benefits of green IT efforts. Further, it will continue to be unclear to what extent these efforts are supporting the federal government's broader sustainability initiatives.

In addition to the activities agencies have underway to meet the requirements of the executive orders, we identified a number of leading practices used by government entities and private-sector organizations that are relevant to green IT. Among others, these practices include leadership, funding, prioritization, and employee training and involvement.

Obtain senior management commitment. Senior management commitment can remove potential obstacles when implementing green IT initiatives and establishing goals.
For example, according to a 2009 study of the key drivers of green IT, identifying an executive sponsor who will champion the green IT initiative helps to remove roadblocks to implementation.

Align green IT with the organization's budget. According to a 2007 industry report on creating a green IT action plan, green IT must fit within an organization's anticipated budget. In recognition of the importance of adequate funding to program success, the 2009 executive order states that, starting in fiscal year 2011, strategic sustainability efforts, which include electronics stewardship (green IT), should be integrated into the agency's strategic planning and budgeting process, including the agency's strategic plan.

Evaluate and prioritize green IT options. With various green IT options available, lifecycle return on investment can be a useful tool for determining which options provide the greatest return in an environment of reduced agency budgets. According to a 2009 survey of IT professionals by a national IT services and solutions provider, IT departments may be forgoing large, long-term savings by ranking factors such as cost over energy efficiency in their purchasing decisions. One recommendation from the survey is that organizations need to prioritize their actions based on costs and benefits.

Provide appropriate agency personnel with sufficient green IT training. As part of a 2010 private-sector survey of federal chief information officers, industry officials also offered some observations, including that agencies should work with the Office of Personnel Management to improve the IT workforce. The survey noted that, in doing so, government organizations should use existing best practices, such as those found at the Department of Defense, to train employees and develop new leaders. In recognition of the importance of training, the 2007 executive order states that agencies are to establish programs for environmental management training. Implementing instructions associated with that order require each agency to ensure that all personnel whose actions are affected by the executive order receive initial awareness training as well as necessary refresher training on the goals of the executive order. Overseas, one British report indicated that increasing the capability of staff will not only improve the performance of overall IT operations but could also reduce the amount that the public sector spends on IT consultants and contractors by some 50 percent.

Procure IT equipment that meets the most stringent EPEAT standard available, if economically practical. As discussed previously, EPEAT is a tool to help purchasers in the public and private sectors evaluate, compare, and select electronic products based on their environmental attributes. EPEAT-registered products must meet 23 required environmental performance criteria. The products are then rated gold, silver, or bronze based on whether they meet 75 percent or more, 50 percent to 74 percent, or less than 50 percent, respectively, of 28 optional criteria. The three EPEAT rating levels differ to a small, but measurable, extent in their environmental benefits. As we reported in 2009, if federal agencies replaced 500,000 non-EPEAT-rated laptop computers and computer monitors with EPEAT bronze-rated, silver-rated, or gold-rated units, the federal government would achieve energy savings equivalent to 182,796 U.S. households, 183,151 households, or 183,570 households, respectively.
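The EPEAT tiering rule described above reduces to a pair of thresholds over the optional criteria once all required criteria are met. A minimal sketch of that rule follows; the function name and counts-based interface are our own illustration, not part of the EPEAT registry.

```python
def epeat_tier(required_met: int, optional_met: int) -> str:
    """Map criteria counts to an EPEAT tier as described above."""
    if required_met < 23:          # all 23 required criteria are mandatory
        return "not registered"
    share = optional_met / 28      # share of the 28 optional criteria met
    if share >= 0.75:
        return "gold"
    if share >= 0.50:
        return "silver"
    return "bronze"

print(epeat_tier(23, 22))  # gold   (22/28 = 79 percent)
print(epeat_tier(23, 14))  # silver (14/28 = 50 percent)
print(epeat_tier(23, 10))  # bronze (10/28 = 36 percent)
```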
In the non-federal government sector, in March 2009 the city of San Francisco upgraded its environmental requirement for IT purchases to the EPEAT gold level as its procurement baseline whenever possible.

Consolidate and standardize IT equipment and services. In an earlier 2011 report, we noted that, because procurement at federal departments and agencies is decentralized, the federal government is not fully leveraging its aggregate buying power to obtain the most advantageous terms and conditions for its procurements. The report also stated that applying strategic sourcing best practices throughout the federal procurement system could produce significant savings. Similarly, according to a 2010 report by a private-sector IT council, the federal government's costs of operating IT systems are higher than they need to be, in some cases by more than a factor of two. The report estimated that at least 20 percent to 30 percent of the more than $70 billion spent annually on IT assets could be eliminated by reducing overhead, consolidating data centers, eliminating redundant networks, and standardizing applications. Therefore, the report recommended that the federal government consolidate IT infrastructure. In the non-federal sector, the IT council report indicated that IBM had cut its overall IT expenses in half over the past 5 years through consolidation and standardization. In addition, the National Association of State Chief Information Officers (NASCIO) identified consolidation/optimization, through centralizing or consolidating services, operations, resources, infrastructure, and data centers, as its number one priority for 2011.

Implement print management actions beyond duplex printing. Using responses obtained from its 2009 survey of federal employees, an IT provider estimated that the federal government spends about $1.3 billion annually on employee printing, and that about one-third of that total, or about $440.4 million per year, is spent on unnecessary printing. The survey indicated that 89 percent of federal employees report that their agencies do not have formal printing policies in place; for example, according to federal employees, just 20 percent of agencies have restrictions on color printing, only 11 percent of agencies have policies dictating when to print or not to print, and only 5 percent of agencies require personal password codes to print. The survey further noted that 69 percent of federal employees believe that their agencies' documentation processes could realistically be converted from paper to digital trails. In the non-federal sector, Hewlett-Packard implemented managed print services that reportedly allowed a customer to reduce the number of printers by 47 percent globally, cut per-page print costs by up to 90 percent, and save more than $3 million in 2 years in the United States alone. In addition, California implemented the Go-Online program as an alternative to mainframe printing, reportedly reducing the number of pages printed by 54 million and reducing costs by $700,000 annually.

Utilize new IT tools, such as thin client technology. An alternative to the use of desktops that is gaining attention is the use of thin client technology. A thin client is a computer or computer program that depends heavily on another computer to fulfill its traditional computational needs. For thin client computers, the applications software, data, and computer processing power reside on a network server rather than on the client computer.
The Department of State, by the end of fiscal year 2010, had replaced 8,187 standard desktop computers with thin clients, providing reported annual energy savings of 630,399 kilowatt-hours and emission savings of 422.7 tons of carbon dioxide (CO2), an environmental impact equivalent to planting 1,900 trees or powering 71 households year-round.

Initiatives to implement environmentally sound computing practices at federal agencies have the potential to generate savings through reduced energy use and other cost reductions. The agencies in our review, with the assistance of OMB and CEQ, have taken steps to implement green IT-related requirements contained in executive orders. However, even with the potential of green IT, the effectiveness of agencies' efforts cannot be measured, in part because OMB and CEQ have not provided specific guidance to assist agencies in establishing baselines and targets that measure energy or cost savings or other quantifiable benefits. Current OMB and CEQ guidance does not provide the specificity needed to help agencies assess their progress in implementing environmentally sustainable IT practices. The national strategy being developed by CEQ, EPA, and GSA could provide the guidance needed for increased measurement of energy or cost savings related to green IT initiatives. In addition, opportunities exist to enhance these efforts through more widespread adoption of leading green IT practices identified by government entities and private-sector organizations. Without specific guidance, agencies, OMB, and CEQ will continue to be challenged in assessing the effectiveness of green IT efforts and the extent to which these efforts are supporting the federal government's broader sustainability initiatives.

To help federal managers better assess the effectiveness of progress made toward green IT-related sustainability goals, we recommend that the Director of the Office of Management and Budget, in conjunction with the White House Council on Environmental Quality, take the following two actions:

• update the existing green IT sustainability guidance, through the national strategy or another appropriate method, to direct agencies to develop baselines for their green IT-related goals and, where possible, targets that measure energy or cost savings or other quantifiable benefits; and

• consider including the leading green IT practices identified in this report as part of this guidance.

We received e-mail or written responses on a draft of this report from CEQ, OMB, and all six agencies that were included in our review. These comments and our evaluation are summarized below.

• The White House Council on Environmental Quality's Deputy General Counsel provided an e-mail response. In the comments, CEQ partially concurred with our recommendations and also provided technical comments, which we incorporated as appropriate. In response to our recommendations, CEQ agreed to consider including leading green IT practices as part of its update to sustainability guidance. CEQ did not concur that this guidance should direct agencies to develop baselines for their green IT-related goals and, where possible, targets that measure energy or cost savings or other quantifiable benefits. However, as we stated in this report, our research has shown that baselines are needed to measure progress. We also maintain that identifying and tracking the benefits resulting from green IT-related efforts is needed to determine their effectiveness.
• A representative from OMB's Office of General Counsel provided an e-mail response and stated that OMB generally concurred with our draft report and also concurred with comments on the draft provided by CEQ.

• The Department of Commerce's Chief Information Officer provided written comments. In the comments, the department concurred with the findings as they apply to the Department of Commerce. These comments are reprinted in appendix II.

• The Department of Energy's Deputy Chief Information Officer provided written comments in which the department agreed with our assessment of DOE's progress in meeting green IT requirements associated with Executive Orders 13423 and 13514. These comments are reprinted in appendix III.

• The Department of Agriculture's representative from the Program Management Office, Office of the Chief Information Officer, e-mailed comments for GAO's consideration. We incorporated these comments as appropriate.

• The Environmental Protection Agency's GAO Liaison Team Lead in the Office of Budget provided e-mail comments. The agency offered technical comments, which we incorporated as appropriate.

• A General Services Administration management analyst from the Office of the Chief Financial Officer e-mailed a response in which the agency provided no comments.

• A representative from the Department of Health and Human Services' Office of the Assistant Secretary for Legislation provided an e-mail response. The department offered technical comments that we incorporated as appropriate.

As we agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from the date of this letter. At that time, we will send copies of this report to interested congressional committees, the Director of the Office of Management and Budget, the Chair of the White House Council on Environmental Quality, and other interested parties. The report will also be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff members have any questions about this report, please contact David Powner at (202) 512-9286 or pownerd@gao.gov or Frank Rusco at (202) 512-3841 or ruscof@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV.

Our objectives were to (1) determine the extent to which the federal government has defined policy and guidance on green IT and how selected federal agencies are implementing this guidance and policy and (2) identify leading green IT practices used by federal agencies, state and local governments, and private-sector organizations. To accomplish our first objective, we obtained and evaluated executive orders and Office of Management and Budget (OMB) instructions on green IT activities. We also interviewed OMB, White House Council on Environmental Quality (CEQ), and selected agency officials about agency requirements for electronic stewardship or green IT. We focused on a nonprobability sample of six agencies: the Departments of Agriculture (USDA), Commerce (DOC), Energy (DOE), and Health and Human Services (HHS); the Environmental Protection Agency (EPA); and the General Services Administration (GSA).

• USDA was selected because it developed a strategic plan focused solely on green IT, and the department had implemented a green purchasing program. Further, for fiscal year 2009, USDA IT spending was $2.4 billion.
• DOC and HHS were among the top four federal departments in IT spending for fiscal year 2009, spending $3.8 billion and $5.7 billion, respectively. (The Departments of Defense and Homeland Security were also among the top four agencies in IT spending for fiscal year 2009. They were not selected because we have extensive ongoing work at these departments.)

• DOE and EPA have missions and initiatives related to green IT. They jointly operate the Energy Star program. DOE's Federal Energy Management Program has developed guidance that specifies the conditions for agencies to meet executive order requirements. In addition, EPA has a partnership program, known as the Federal Electronics Challenge, which, among other things, encourages federal agencies to purchase green electronic products and manage obsolete electronics in an environmentally safe way.

• GSA is the federal government's supply arm, property manager, and procurement agency. The agency has several ongoing initiatives and has published information on some federal, state and local, and foreign initiatives.

At each agency, we focused on the electronic stewardship (i.e., green IT-related) requirements found in Executive Orders 13514 and 13423. We analyzed internal guidance and instructions developed in response to the green IT-related requirements in the orders; identified ongoing or planned initiatives; and obtained and analyzed examples of reported or proposed cost savings for these initiatives. In addition, we analyzed the sustainability plans produced by the agencies. As part of this analysis, we used GAO-developed criteria on measuring performance (GAO, Executive Guide: Measuring Performance and Demonstrating Results of Information Technology Investments, GAO/AIMD-98-89 (Washington, D.C.: March 1998)).

We conducted this performance audit in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

In addition to the individuals named above, Pamlutricia Greenleaf, Assistant Director; Carlos E. Hazera, Assistant Director; Robert J. Baney; Kami Corbett; Wilfred B. Holloway; Franklin D. Jackson; Lee A. McCracken; Vasiliki Theodoropoulos; Adam Vodraska; and Eric D. Winter made key contributions to this report.
The federal government's substantial use of information technology (IT) contributes significantly to federal agencies' energy use and environmental impact. To help mitigate this impact, organizations have adopted practices for using computing resources in a sustainable and more environmentally friendly manner, sometimes referred to as "green IT." These practices include equipment acquisition, use, disposal, and related processes. GAO was asked to (1) determine the extent to which the government has defined policy and guidance on green IT and how selected federal agencies are implementing this policy and guidance, and (2) identify leading green IT practices used by federal agencies, state and local governments, and private-sector organizations. To do this, GAO evaluated federal guidance and policy, as well as guidance and initiatives at selected agencies; identified and characterized efforts in the public and private sectors; and interviewed officials.

Two executive orders, from 2007 and 2009 respectively, assign responsibility to federal agencies for increasing their environmental sustainability and contain green IT-related requirements. These requirements include acquiring electronic products that meet certain environmental standards, extending the useful life of electronic equipment, implementing power management on computers, and managing federal data centers in a more energy-efficient manner. In meeting these and other sustainability requirements, agencies are required to designate senior sustainability officers and develop performance plans that prioritize actions for meeting the requirements in the executive orders. The six agencies in GAO's review (the Departments of Agriculture, Commerce, Energy, and Health and Human Services; the Environmental Protection Agency; and the General Services Administration) have developed sustainability performance plans and taken additional steps to implement the executive orders' requirements. For example, they have increased their acquisition of certified energy-efficient IT equipment, established and implemented policies to extend the useful life of agency equipment, and developed environmental policies for disposing of electronic equipment.

However, the overall effectiveness of the agencies' efforts cannot be measured because key performance information is not available. Specifically, the agencies have not identified the information needed to measure the progress or results of their efforts. For example, the agencies have generally not established baselines (starting points) or developed performance targets that are consistently defined in terms of quantifiable benefits, such as a reduction in energy use. This is in part because the Office of Management and Budget (OMB) and a key White House council, the Council on Environmental Quality (CEQ), have not developed specific guidance on establishing performance measures for green IT efforts. Without such guidance, the effectiveness of these efforts and their contribution to overall federal sustainability goals will remain unclear.

GAO identified a number of leading practices used by federal, state, and local government and private-sector organizations that are relevant to green IT. These practices include enhanced leadership, dedicated funding, prioritization of efforts, and improved employee training, as well as acquiring IT equipment with the highest energy efficiency ratings; consolidating equipment and services; reducing use of paper; and using new, more efficient computers.
For example, according to a 2009 survey of federal employees, agencies spend about $440.4 million per year on unnecessary printing. By contrast, in the non-federal sector, a major IT equipment company implemented managed print services that reportedly reduced the number of printers by 47 percent globally, cut per-page print costs by up to 90 percent, and saved more than $3 million in 2 years in the United States alone.

GAO recommends that OMB and CEQ develop green IT guidance to help agencies more effectively measure performance and encourage the use of leading practices. In comments on this report, OMB and CEQ partially concurred with the recommendations. They agreed to encourage the use of leading green IT practices but did not agree that additional guidance was needed for measuring performance. GAO continues to believe that additional guidance is needed to help determine the effectiveness of agencies' efforts.
To address our research objectives, we selected a judgmental sample of 15 IDSs that clinically integrate primary, specialty, and acute care and serve uninsured and medically underserved populations. To select our sample, we began by reviewing published research and interviewing researchers with expertise in IDSs. As a result, we identified 44 public and private nonprofit systems from which to select our sample. In December 2009, we sent a Web-based data collection instrument to these systems to determine the extent of their clinical integration and to obtain additional information about organizational features of each system, including whether the system is made up of subsystems (local or regional delivery systems that are organized below the system level) that integrate clinical care within themselves. We sent e-mail reminders and conducted telephone outreach to systems that had not responded by our requested deadline. In the end, we received completed data collection instruments from 19 systems. We excluded 4 systems from our study because their responses indicated a lack of clinical integration or because of an affiliation with a "closed system," one that exclusively serves members of the system's health insurance plan. Our final sample consisted of 15 IDSs, which include five subsystems (see table 1). The 15 IDSs vary in many respects, including the degree to which they are integrated, specific organizational features, and payer mix (e.g., the extent to which they serve Medicare and Medicaid beneficiaries and the uninsured) (see app. I).

We reviewed the Web sites of the IDSs in our sample, relevant articles and reports about the systems, and other documents the systems provided. Based on our review of the extent of clinical integration of each system, its location (census region and urban/rural), and whether it is publicly or privately owned, we selected for site visits four systems that reflected variation along these dimensions: Ascension Health's Seton Family of Hospitals, Denver Health, Henry Ford Health System's Detroit Region, and New York City Health and Hospitals Corporation's (NYCHHC) Queens Health Network. We administered a structured interview protocol with chief medical officers (or other system officials, as appropriate) to obtain information on organizational features that IDSs use to support strategies to improve patient care, approaches IDSs use to facilitate access to care for underserved populations, and challenges IDSs encounter in providing care, including care provided to underserved populations. To gain additional in-depth information, we conducted interviews with IDS officials at the four sites we visited, including system executives and clinical staff.

In some cases, information provided by IDS chief medical officers and officials is specific to underserved populations, and we note it as such in this report. In other cases, the information is more general, relating to the overall system or patient population, which can include underserved patients. Findings in this report are based on a judgmental sample and are not generalizable to all IDSs. The organizational features, patient care strategies, approaches to facilitate access, and challenges that we describe are not necessarily unique to IDSs; they may also be found in other health care settings and experienced by other providers. However, the information we present is from the perspective of the IDSs in our sample.
We relied on data obtained through the Web-based data collection instrument, interviews with system representatives, and published studies, and we did not conduct independent analyses of the effectiveness of strategies. We conducted this performance audit from June 2009 to November 2010 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

IDSs vary in their organizational configuration and in the continuum of services they provide. They frequently use patient care strategies such as care coordination, disease management, and care protocols. Other providers that are not part of IDSs may also use some of these strategies and may face similar challenges. For example, some IDSs serve a patient population that includes a high proportion of underserved individuals and may face financial challenges in doing so; other providers who serve a high proportion of underserved individuals may face some of the same financial challenges.

IDSs can be organized in different ways and use various staffing models. Some IDSs are a single entity that includes a delivery system (hospitals, physicians, and other providers) and a health insurance plan. Examples of this type of IDS include NYCHHC and Geisinger Health System. Other IDSs include a delivery system but do not have a health insurance plan, such as Partners Healthcare and Memorial Healthcare System. IDSs may employ their own physicians, rely on community-based physicians who are not employed by the system but are granted use of the hospital facilities and staff, or use a combination of those two approaches. An IDS can be organized at the system level, or it can be more decentralized, having subsystems that organize health care at the local or regional level. These subsystems integrate care within themselves but not necessarily with other subsystems in the overall system, and IDSs can consist of multiple subsystems. Because there is so much variation in the ways that IDSs can be organized, it is difficult to determine the exact number of IDSs in the country; however, millions of Americans receive care from IDSs.

IDSs offer a continuum of services to a particular patient population or community and can vary in what services are provided within this continuum. For example, some IDSs provide nursing home care within their systems, and others do not. Similarly, not all IDSs provide certain specialized services, such as organ transplantation or major burn services, within their systems. An IDS may contract with other providers to offer certain services or may refer patients to providers not affiliated with the IDS for a service.

IDSs use multiple strategies to improve patient care, such as care coordination, disease management, clinical practice guidelines, and care protocols. Care coordination is the integration of patient care activities between two or more participants involved in a patient's care to facilitate the appropriate delivery of services. It occurs across the continuum of care and across different delivery sites, encompassing both health care and social support interventions, and is often used for patients with special health care needs or chronic health conditions.
Care coordination activities can include case management and patient navigation services. Disease management involves providing coordinated health care interventions and communications to patients who have chronic conditions, such as diabetes or asthma, where patients' self-care efforts can affect their health outcomes. Disease management is a systematic approach to patient care that uses evidence-based practice guidelines. Evidence-based practice guidelines, also referred to as clinical practice guidelines, are systematically developed statements that guide providers and patients in making decisions about appropriate health care for certain conditions. They are typically based on an examination of the best available scientific evidence and broad consensus about the best treatment to follow. Care protocols, which are generally more specific than guidelines, provide more detail about the management and treatment of diseases and conditions.

Patient care strategies can be designed to achieve a variety of goals, such as improved health outcomes and quality of care, increased efficiency, and lower costs. They may be performed by physicians, nurses, or other clinical or nonclinical staff members and often are implemented outside of a patient's face-to-face appointment with a physician. Studies have shown that IDSs are more likely to use patient care strategies than are other providers, such as solo practitioners. For example, a national study of the management of chronic illness for patients with asthma, congestive heart failure, depression, and diabetes found that certain IDSs were significantly more likely to use recommended, evidence-based care management processes than were less organized providers. In addition, a study of physician practices in California in the early 2000s found that physicians affiliated with an IDS were more likely to use disease management programs than were physicians in nonintegrated medical group practices or small practices.

Depending on their geographic location and their mission, IDSs serve varying proportions of underserved populations. Individuals who are underserved have higher rates of illness, and they often face barriers to accessing timely and needed care. For example, uninsured patients are more likely than insured patients to have chronic illnesses that are undiagnosed or undertreated. People with limited English proficiency may have problems comprehending health care information and complying with treatment. Rural residents also face barriers to access because of physician shortages in rural areas. In addition, underserved patients may have difficulty obtaining specialty services, including diagnostic services. Integrating care, such as by linking primary and specialty care, can reduce some of the access barriers that underserved populations experience.

The 15 IDSs in our sample collectively reported that organizational features such as using electronic health records (EHR), operating health insurance plans, and employing physicians all support various strategies for improving patient care, including care coordination, disease management, and use of care protocols. Officials at some IDSs in our sample told us that using EHRs supports their strategies to improve the quality of patient care by increasing the availability of clinical information and patient population data and by improving communication. All 15 IDSs reported having implemented EHRs to some extent.
For example, as of March 2010, Seton was in the process of implementing its EHR, and Henry Ford's EHR was available at all of its facilities. Clinical strategies supported by using EHRs include care coordination, disease management, electronic prescribing (e-prescribing) and computerized physician order entry (CPOE), and care protocols.

According to officials at some IDSs, using EHRs facilitates care coordination because EHRs make patient clinical information more readily available to providers and improve communication among providers, staff, and patients. For example, officials from Denver Health characterized the EHR as a key component of integration. At Denver Health, the EHR supports care coordination because physician notes from patient encounters are scanned in within 24 hours of patient contact and clinical information, such as previous laboratory tests, is available to all providers (for additional information, see sidebar). Similarly, an official from Mayo Clinic told us that the EHR helps avoid overutilization and duplication of services, and an official from Partners Healthcare told us that the EHR aids in care coordination because physicians can see patient clinical information no matter where in the system the patient is. Marshfield Clinic's EHR is also available at all of its facilities, giving providers access to clinical information, digital radiology images, and capabilities such as e-prescribing. At Marshfield Clinic, each patient's EHR contains a "dashboard" with information on preventive services to highlight needed services and to facilitate communication among providers so that services and assistance can be requested electronically. Marshfield's EHR also creates a list of high-risk patients with outcomes in need of intervention so that physicians and other staff can follow up with those patients.

According to IDS officials, using EHRs facilitates disease management by making patient-level and population-level data available to providers, which allows providers and IDSs to adjust approaches to treatment based on individual patient and population-wide progress. NYCHHC, for example, has disease management programs for patients with asthma, diabetes, congestive heart failure, hypertension, cardiovascular disease, and depression. Each regional subsystem within NYCHHC has its own separate EHR. The EHRs update disease registries nightly, and through the disease registries, providers can develop a comprehensive understanding of a patient over time. For example, providers can assess a given diabetic patient's health status at any point in time and compare it to another point in time to ascertain what may have been associated with a change in health status. The diabetic disease registry also enables NYCHHC physicians with similar groups of patients to compare their patients' outcomes. NYCHHC officials said that information technology makes it easier to get data and identify trends, and that EHRs allow them to anticipate and mitigate potential problems more easily and much earlier. Because the EHR provides real-time clinical information, providers are able to see test results immediately upon completion, which might not be possible without an EHR. Having real-time information allows providers to initiate appropriate treatment or follow-up. Similarly, the Doc Site Registry at Henry Ford, which uses a common EHR across all its facilities, is a disease registry program available for all patients that is linked to the EHR.
It covers diseases such as depression, chronic obstructive pulmonary disease, hypertension, asthma, and chronic kidney disease. The Doc Site Registry prompts providers to administer missing preventive services during patients' visits. Staff at Henry Ford use the Doc Site Registry to identify groups of patients who are in need of care management. In addition, Hennepin Healthcare System has used an EHR system since February 2007 for both inpatient and outpatient services. According to officials at Hennepin Healthcare System, the EHR is fundamental to providing real-time awareness and support for clinical care, including care for patients with chronic diseases. One of the officials added that using data from the EHR enables them to determine which interventions are more effective in specific circumstances and gives the staff insight into how to improve care. Officials at some of the IDSs reported that using EHRs with e-prescribing and CPOE capabilities reduces errors and lowers costs. For example, e-prescribing at Marshfield Clinic was reported to reduce errors related to illegible handwriting and unintentional drug interactions. Through the EHR, prescribers are alerted when an allergy or drug interaction exists. Marshfield Clinic's EHR also requires physicians to consider appropriate alternatives for prescription drugs, and a study found that Marshfield Clinic's suggestions of "preferred alternative" prescription drugs saved payers and patients $2.5 million in 1 year. In addition, NYCHHC officials told us that its CPOE includes drug interaction warnings and improves the legibility of physician orders. Officials from some IDSs told us that their systems' EHRs facilitate the use of care protocols and clinical practice guidelines by prompting providers to use them and tracking their use. At Denver Health, for example, when entering an order into the CPOE through the EHR, the physician is presented with a standard set of orders that is applicable to the patient. The sets of orders are linked with guidelines explaining the need for the specific orders, and physicians must explicitly de-select any orders they disagree with. A Denver Health official told us that guidelines incorporated into the EHR's CPOE function are more likely to be followed than standalone guidelines. One example is the standardized order set for Denver Health patients with ketoacidosis; its use was associated with a 23 percent decrease in intensive care unit length of stay and a 30 percent decrease in hospital length of stay. In another example, Allina Hospitals & Clinics, which uses a single EHR system at all 11 of its hospitals and for all of its employed physicians, created systemwide pneumonia vaccine guidelines to better identify patients eligible for the vaccine. Allina Hospitals & Clinics' EHR electronically prompts the nurse to use the guidelines at the time the patient is assessed for admission.

Officials from IDSs in our sample reported that operating a health insurance plan can support patient care strategies by providing the IDS with both financial resources, such as savings resulting from reducing avoidable hospitalizations for health plan members, and data on health insurance plan members. IDS officials reported that financial resources could be used to fund services such as care coordination—which many insurers do not reimburse—and the data could be used as a basis for implementing strategies such as disease management.
A Geisinger Health System official discussed how operating a health insurance plan could enable an IDS to invest financial resources in coordinating care for patients insured by the plan. The official said one way that the Geisinger Health Plan provides care coordination is through its ProvenHealth Navigator program. Geisinger Health Plan hires nurses trained in population health management to work in primary care settings, where they provide specialized care coordination and preventive services for the plan’s high-risk patients. According to the Geisinger Health System official, the ProvenHealth Navigator program is associated with reductions of up to 30 percent in hospital readmissions and about 20 percent in acute admissions. Because Geisinger Health System hospitals have fewer admissions, Geisinger Health System revenues from hospital care are reduced. However, for the overall system, the reduced revenue has been offset by savings the health insurance plan experiences because it is paying for fewer hospital admissions for its members. Furthermore, patients have benefited from avoiding preventable hospital stays. Officials at some IDSs provided us with examples of ways that operating a health insurance plan enables them to allocate resources for disease management services or enables them to create better-informed disease management programs by providing access to useful patient information through the tracking of health insurance plan data. Henry Ford implemented an innovative protocol for use of an outpatient heparin treatment in place of an inpatient heparin treatment for patients with deep vein thrombosis before outpatient heparin was the standard of care. The type of heparin used in the outpatient treatment was not covered by most insurers. Because Henry Ford controls its own insurance benefit through its health insurance plan, the Health Alliance Plan, it was able to cover the cost of the outpatient heparin, which was associated with a decreased length of stay, as well as a decreased cost per admission. Henry Ford also uses its health insurance plan claims data to better inform patient care. For example, Henry Ford has access to over 10 years of data on patients with osteoporosis, allowing it to know how patients were treated and what the outcomes of those treatments were, which can guide future efforts to manage treatment of osteoporosis. Similarly, an official from Intermountain Healthcare told us that it used its health insurance plan data to identify patients with conditions such as hypertension and diabetes and conduct targeted education for those patients through mailings and other initiatives. Employing physicians, rather than relying solely on community-based physicians who are not employed by the system, may facilitate strategies to improve the quality of patient care at an IDS, in part because of the IDS’s ability to require or encourage certain aspects of care and to monitor certain aspects of the care employed physicians provide. At each of the 15 IDSs in our sample, some physicians are employed by the IDS. Strategies supported by the employment of physicians include accountability for quality of care, use of care protocols, and mitigation of physician concerns related to payment for care for underserved populations. 
Employment of physicians was reported to facilitate physician accountability for quality of care because physicians who are employed by the IDS are expected to meet certain performance indicators, and the IDSs collect data on and review physician performance. For example, an official from Memorial Healthcare System said that employed physicians are expected to comply with performance indicators, but that Memorial Healthcare System does not have the same leverage with community-based physicians it does not employ as it has with the physicians it employs. The Memorial Healthcare System official told us the system can compare an employed physician's data with those of its other employed physicians and with performance benchmarks, and that such data comparisons help motivate physicians to improve their performance. A Denver Health official told us that the employment of physicians is an important part of implementing physician accountability and quality of care, and that physicians that the system employs are more likely to fully support hospital initiatives than are community-based physicians who are not employed by the system. An official from Intermountain Healthcare said that physicians are motivated to improve performance when presented with a comparison of individual performance indicators and peer performance indicators, and Intermountain Healthcare is able to provide more detailed information on physicians it employs because of the employed physicians' use of the EHR.

Officials from some IDSs told us that employment of physicians can increase adherence to care protocols, because IDSs can require or encourage their use. For example, Geisinger has a pay-for-performance program for providers of coronary artery bypass graft surgery for Geisinger patients. Because Geisinger employs these surgeons, it can require them to follow 40 care protocols through its ProvenCare Heart Program. In a 2007 study, adverse outcomes occurred less often in the ProvenCare treatment group than in a control group of patients treated before ProvenCare was implemented, and ProvenCare patients were more likely to be discharged to their homes rather than to another care facility. In addition, an official from Memorial Healthcare System told us that employed physicians are expected to follow protocols for chronic conditions. Memorial Healthcare System can track whether employed physicians—in both inpatient and outpatient settings—are using the protocols, and employed physicians receive feedback on their compliance with protocols. Memorial Healthcare System can track use of protocols for other physicians who provide inpatient services, but cannot track use of the protocols for other physicians who provide outpatient services.

Officials from several IDSs told us that employment of physicians can facilitate provision of care to underserved populations because compensation from IDSs can mitigate physicians' concerns that they may not receive payment from uninsured patients. For example, an official from Intermountain Healthcare told us that physicians receive the same compensation regardless of the patient's insurance status. At Henry Ford Hospital, where the Henry Ford Medical Group is the sole source of physician staffing, the physicians are expected to provide the same standard care processes, which are evidence-based, without considering the patient's insurance status, and often physicians do not know what the patient's insurance status is.
Similarly, the Sisters of Mercy Health System's St. Edward Mercy Health System has a set compensation structure for newly hired primary care physicians for at least 2 years under which there is no financial incentive for them to distinguish among patients of different insurance status. After 2 years, primary care physicians are asked to draw at least 10 percent of their patients from Medicare or Medicaid populations, and for the Medicare and Medicaid patients above that 10 percent level (up to 20 percent), the physicians are reimbursed at a rate similar to that of commercial insurance.

The IDSs in our sample discussed several approaches that they use to facilitate access to care for underserved populations. These include providing community-based care, conducting outreach, helping patients apply for coverage programs, providing financial assistance, integrating mental health and primary care services, and collaborating with community organizations. Officials from some IDSs reported providing underserved children with some of the health care services along their continuum of care—such as primary care, mental health care and counseling, and health education—through school-based health centers (SBHC). Examples of IDSs with school-based health centers serving underserved children include Denver Health, Henry Ford, Intermountain, NYCHHC, and Parkland Health & Hospital System. Henry Ford's SBHCs provide management of chronic illnesses, such as asthma and diabetes, and mental health counseling and referral in addition to their primary care services. Intermountain's SBHCs have expanded access to their health services to family members of the children they serve.

Officials from several IDSs reported either operating or collaborating with federally qualified health centers (FQHC) to provide care to underserved populations. Some IDSs, such as Denver Health and Parkland, operate FQHCs within their systems. All of Denver Health's 12 school-based health centers are FQHCs, as are all 8 of its primary care clinics, its urgent care center, and its hospital-based women's care clinic. Similarly, Parkland's Homeless Outreach Medical Services, which operates mobile health units in partnership with the City of Dallas, is an FQHC. At two other IDSs—Seton and Henry Ford—there are no FQHCs among the clinics in their system, but both IDSs collaborate with local FQHCs that are not part of their system. For example, Henry Ford collaborates with two local FQHCs, facilitating access to primary and specialty health care services. The FQHCs provide primary care services to patients, and Henry Ford provides needed specialty care services. In addition, Henry Ford collaborates more broadly with one of the FQHCs, providing resources to help meet the clinic's needs. Another IDS, Marshfield Clinic, has a contractual partnership with an outside FQHC through which Marshfield Clinic provides primary and preventive health care and dental care to low-income uninsured and underinsured individuals and families. Marshfield Clinic also supported the establishment of the FQHC by helping it apply for federal grant funding.

Some IDSs operate mobile health units to expand access to care for underserved populations, such as people who are homeless and residents of rural areas. For example, Parkland's Homeless Outreach Medical Services mobile health units visit Dallas area homeless shelters to provide medical and social work services to children and adults.
Services include immunizations, care for acute and chronic conditions, health education, and well-child care. To facilitate access to health care services for patients in rural communities, especially those who are uninsured, Seton operates a mobile mammography program and a mobile pediatric clinic. The mobile mammography program provides free mammography screening, breast self-examination instruction, and a clinical breast examination as well as eligibility screening for available public and Seton-sponsored health coverage. A nurse provides case management services for women screened through the mobile mammography program. The mobile pediatric clinic serves children through age 21, providing services such as well-child care, immunizations, and chronic disease management.

Some IDSs, such as Marshfield Clinic, St. Edward Mercy Health System, and Geisinger, facilitate access to certain health care services for patients in rural areas by using telehealth to provide services such as primary care, mental health care, and certain specialty services. Telehealth enables providers to interact remotely with patients and other providers by using electronic communication and technologies such as video conferencing, bringing a wider range of services to underserved individuals in their communities. For example, Marshfield Clinic telehealth services are available in 40 medical specialties at 55 sites, including dental clinics, skilled nursing facilities, Head Start clinics, a rural hospital, and 31 rural clinics, 5 of which are FQHCs. Telehealth is available in care areas such as mental health, dentistry, and primary care. Through telehealth, Marshfield Clinic specialists and primary care providers consult with each other and outside referring physicians, and Marshfield Clinic patients can receive services from other specialists located in academic medical and research centers throughout the country. St. Edward Mercy Health System facilitates access to health care services for pregnant women and newborns in rural communities through its participation in the perinatal telehealth program of the University of Arkansas for Medical Sciences and Arkansas Children's Hospital. The program links physicians at St. Edward Mercy Medical Center with a neonatologist or an obstetrician who specializes in high-risk pregnancies for consultation. To facilitate access to psychiatric services for veterans living in rural communities, Geisinger incorporates telemedicine into its Reaching Rural Veterans program. The program uses telehealth and a patient navigator to identify and assist veterans who have post-traumatic stress disorder and their families and connect them to local private and public resources.

Most IDSs in our sample conduct outreach targeted at underserved populations. IDSs engage in outreach activities such as health education, health screening, and linking individuals with providers for needed health care services. For example, Denver Health conducts outreach to underserved men in targeted neighborhoods and at the Denver County jail through its Men's Health Initiative. The Men's Health Initiative provides basic health screening; case management services, including services for men with complex health care needs; and referrals for specialty health care services.
At Cambridge Health Alliance, volunteer health advisors work in the community to conduct health education and screening, participate in health fairs, provide referrals for services, and lead culturally and linguistically appropriate peer support groups such as those for patients with chronic conditions. According to Cambridge Health Alliance, since 2001 the volunteer health advisor program has provided 8,100 screenings, and more than 700 individuals have been enrolled in health coverage and referred to a primary care doctor.

All IDSs in our sample facilitate access to care for uninsured patients by helping them complete applications for public coverage such as Medicaid and local coverage programs. At some IDSs, application assistance is a component of community outreach activities, such as in Denver Health's Men's Health Initiative. Two systems—Seton and Parkland—use a Web-based tool to screen for eligibility for federal, state, and local health insurance programs. According to Seton representatives, using the Web-based tool enables the system to adopt a "no wrong door" approach, screening patients for eligibility regardless of where the patient enters the system. With the Web-based tool, Seton can track whether patients submitted applications, remind patients to do so, and track their enrollment status. Parkland uses the tool, which screens for eligibility for about 100 programs, at its main campus and all of its community health clinics, school-based health centers, and other locations.

Many IDSs in our sample also provide financial assistance, such as a sliding fee scale, for health care services to patients who are uninsured and do not qualify for public health insurance programs. For example, NYCHHC operates the HHC Options program, through which individuals who are uninsured or underinsured and meet income requirements pay a fee based on income and family size for health care services. Officials from another IDS, Seton, told us that it operates a "health insurance-like" program known as Seton Care Plus, through which uninsured individuals who meet income requirements can access health care services. Seton Care Plus enrollees pay a fee based on income for primary care services provided at Seton primary care clinics and receive discounts for specialty services from community specialists who have agreed to provide such services. According to officials from Seton, although Seton Care Plus is not insurance, it is similar in some ways, such as in its requirement for prior authorization for certain services and in its tracking and monitoring of the use of medical services.

To improve access to mental health care services for patients, including for underserved populations, some IDSs integrate mental health and primary care services by providing mental health screenings in primary care locations or collocating mental health providers in primary care settings. For example, the NYCHHC Queens Health Network, which serves a high proportion of patients who are uninsured or have Medicaid coverage, annually screens all adult patients with diabetes for depression in primary care settings. The primary care physicians treat patients with mild to moderately severe depression, and patients needing more specialized care are referred to the mental health clinic. Similarly, Henry Ford conducts depression screening for patients with chronic conditions.
Henry Ford implemented a two-step screening process—which is embedded in the EHR—in its primary care clinics. The patient is first screened using a two-item screening questionnaire, and if that screening indicates a need, the patient completes a second, more extensive depression screening. The EHR uses the patient’s responses to notify the primary care provider if treatment for depression is required and provides the evidence-based treatment protocol. Henry Ford reported that in a 12-month period from June 2007 to June 2008, its primary care doctors were providing treatment for depression to 67 percent of the patients they identified through screening. In addition, two IDSs with facilities in Minnesota, Mayo Clinic and Allina, participate in a collaborative care model in which primary care providers screen and treat adult patients with depression. Primary care providers use a standardized questionnaire to assess symptoms of depression, a tracking system to monitor patient status, a medical guide for identifying appropriate treatment, care coordination for patients, a psychiatrist who is available for consultations, and tools for preventing relapses by patients in clinical remission. Another way some IDSs facilitate access to mental health care services is by collocating providers such as social workers, nurses, and psychiatrists in primary care settings. For example, Denver Health collocates some mental health providers in community health clinics and school-based health centers. Some of the community health clinics have a limited number of mental health providers on site, as well as a psychologist or psychiatrist. Staff at the school-based health centers include master’s level mental health clinicians and child psychiatrists for consultations as needed. Another example of an IDS that collocates primary care and mental health care is NYCHHC, where most mental health clinics are collocated with primary care clinic locations. Therefore, patients needing both primary care and mental health services can obtain those services in one location. IDS officials told us that improving access to mental health care could have a beneficial effect on a patient’s physical health. A Denver Health official noted that patients with unmet mental health care needs could face difficulty adhering to medical care treatment plans. Similarly, a staff member at Seton’s community clinic commented that diabetic patients who are depressed and therefore not taking care of themselves often cannot manage the disease appropriately. Officials from most of the IDSs in our sample reported collaborating with community organizations to facilitate access to care for underserved populations. In these collaborative efforts, IDSs work with organizations such as other providers and faith-based organizations, sometimes providing financial resources or directly providing patient care, referrals, screening services, or health education. For example, Seton collaborates with other local organizations through the Health Alliance for Austin Musicians (HAAM) to provide physical and mental health care services to low-income, uninsured musicians in Austin, Texas. Seton provides primary care services through Seton Care Plus to HAAM members, while other community organizations offer mental health, dental, and audiology services. 
Although HAAM members obtain mental health care services through a HAAM mental health provider, they obtain medications prescribed by that provider through Seton Care Plus, which gives them access to low-cost prescription drugs. Hennepin Healthcare System collaborates with other providers in its community on a pilot program to help patients who sought nonemergency care in the emergency department to find a primary care home. Some other IDSs collaborate with health clinics in their communities by providing in-kind and financial resources. For example, Intermountain and Geisinger provide financial assistance to local health clinics. St. Edward Mercy Health System provides office space to a local community organization that provides social services and assistance to children who have been abused, and a St. Edward Mercy Health System physician serves as medical director for the organization. Some of the systems in our sample work with local faith-based organizations. For example, Henry Ford works with 15 to 20 local churches to offer health education and screening related to issues such as nutrition, cancer, and heart disease. Memorial Healthcare System and Parkland also collaborate with area churches to conduct outreach, provide some health care services, or provide health education and screening. IDSs in our sample reported facing various operational challenges in providing care, including care for underserved populations. Some reported that not receiving reimbursement from health care insurance companies for the care coordination services they provide to patients is a financial challenge. Other operational challenges IDSs reported included finding specialty care for underserved patients, including mental health care; sharing information with providers outside the system; and changing management and physician cultures to adapt to organizational change. Officials from some IDSs in our sample said that not receiving health insurance reimbursement for the care coordination services they provide is a financial challenge. While all the IDSs in our study provide these services as a patient care strategy, such services are generally not covered by health insurance. For example, Cambridge Health Alliance provides patient navigation and care management services but does not receive reimbursement for those services from health insurance companies. Cambridge Health Alliance said that because these services are necessary for treating certain patients, including those with mental illness, it continued to provide the services without receiving payment for them. Similarly, Henry Ford operates a pediatric medical home program that includes care coordination services, but does not receive health insurance reimbursement for these services. Allina cannot bill for services provided by its nonclinical care guides, whose services are part of a broader care coordination strategy. The care guides are trained to provide one-on-one counseling and patient navigation services to patients diagnosed with a chronic disease to help them meet their clinical goals. Allina told us that the Care Guide program increased the number of clinical goals that were met for participating patients and decreased inpatient care costs. An Allina official also commented that because Allina provides these services without receiving health insurance reimbursement and it does not operate a health insurance plan, it cannot recoup the savings that may result from care coordination, such as the reduced need for services for preventable events. 
Some IDS officials said that finding specialty care, including mental health care, for their underserved patients has presented challenges. The challenges that IDSs may face in finding mental health care providers include recruiting and funding providers to practice in the system and identifying providers in the community to accept referrals of underserved patients. For example, an official from Marshfield Clinic, which serves a rural population, told us the system has difficulty recruiting mental health care providers for its patient population, including Medicare and Medicaid beneficiaries. As a result, the mental health care providers at Marshfield Clinic have a large patient caseload and find it difficult to spend time collaborating with other types of providers to integrate care. Like Marshfield, Seton experiences difficulty recruiting psychiatrists to practice at one of its rural facilities. Seton officials also told us that while previously a psychiatrist worked in one of the system’s urban primary care clinics, the clinic could not sustain the funding for the position. Clinic staff said they are able to consult with a psychiatrist only on a limited basis, and that there are many patients at the clinic who have serious mental illnesses but cannot access the care they need. In addition, Seton officials told us it is challenging to find psychiatrists in the community to accept referrals for the system’s Medicaid and uninsured patients. Officials from some IDSs also told us that, in certain circumstances, they face challenges when seeking other types of outside specialty care for their underserved patients. For example, Sisters of Mercy refers patients to practices outside its system for certain specialty care that is not available within its system. However, it sometimes encounters problems finding outside specialty providers to treat its uninsured patients. Seton has also experienced this challenge. To fulfill its mission to provide health care to underserved populations, Seton provides primary care services to uninsured patients through its community clinics. It also participates in a program through which it recruits specialists from the community who agree to see a certain number of uninsured patients, but it has experienced difficulty finding specialists to participate in this program and provide specialty care to Seton patients. Some IDS officials described challenges related to sharing the clinical information in patients’ EHRs with providers outside of their systems. To improve availability of patient clinical information, some IDSs make their EHRs available to outside providers that also treat their patients. Of those IDSs, some reported that while the outside providers can read the EHRs, they cannot directly enter clinical information. For example, Geisinger makes its EHR available to outside providers to give them immediate access to patient medical records, but the providers are not able to enter any additional clinical information directly into the EHR. Geisinger can scan this information into the EHR if the outside provider communicates it, but scanning is not instantaneous, as direct entry would be. A Geisinger official noted that while scanning is not optimal, absent a common EHR where information can be directly entered, the scanned records can be a helpful supplement to a patient’s EHR in many cases. 
Similarly, providers serving patients at the FQHCs that Henry Ford partners with can only view information in the EHR and cannot directly enter clinical information themselves. While the Geisinger and Henry Ford EHRs are available as read-only for outside providers, the Denver Health EHR cannot be viewed by mental health care providers outside of its system. A Denver Health official told us that, with the exception of enrollees in Denver Health’s managed care plans, Denver Health’s patients with severe and persistent mental illness receive their outpatient mental health care from a community mental health center in Denver that uses an EHR that is separate from Denver Health’s EHR. As a result, Denver Health providers, such as those working in the psychiatric emergency department, do not have immediate access to information about the care their patients received at the community mental health center, which can affect patient care. However, Denver Health providers do have access to an electronic database with the prescription history of patients who visit the community mental health center. Officials from some IDSs told us about challenges they faced as their systems have evolved, in particular, difficulty in changing management and physician cultures and in implementing an EHR. For example, as Allina centralized its supervision of clinical care, it redefined the role of its clinical directors, who used to manage groups of clinical services at individual facilities. When the system transferred supervision of the clinical services from the individual facilities to the central system, the facility clinical directors were concerned that they would no longer have a role in clinical management and would be responsible for administrative functions only. The operational challenge was different at Intermountain, where implementing care protocols required a change in the physician culture because of the generally independent nature of physicians and their concern that Intermountain was trying to “tell them what to do.” The system has been making efforts to motivate physicians to use Intermountain’s voluntary care protocols for about 10 years. To do so, Intermountain uses physician-level data to show how individual physicians are performing relative to their colleagues. Intermountain has more buy-in from physicians now than in the past, but officials there said changing the culture continues to be a challenge. Some IDS officials said that implementing an EHR system is financially and operationally challenging. For example, NYCHHC currently has eight separate EHRs that were developed and customized at the subsystem level, and it has been financially and operationally challenging to consolidate them into an interoperable system. Because the EHRs were developed at the subsystem level, clinical data can be shared within each subsystem, but NYCHHC does not yet have a system that can seamlessly transfer clinical data across the entire system. According to senior NYCHHC officials, consolidating the regional EHRs, as NYCHHC is currently doing for its electronic blood bank registry, is part of the system’s strategic plan. Henry Ford has also encountered challenges in implementing its EHR system. Because of cost and connectivity issues, its school-based health centers do not currently have access to the system’s EHRs. 
A Henry Ford official said that the school-based health centers have competing priorities for funds provided by Henry Ford, including staffing needs, and that Henry Ford has not been able to pay for the implementation of the EHR at the school-based health centers but plans to do so eventually.

We are sending a copy of this report to the Secretary of Health and Human Services and the Administrator of the Health Resources and Services Administration. The report also is available at no charge on GAO's Web site at http://www.gao.gov. If you or your staff have any questions regarding this report, please contact me at (202) 512-7114 or bascettac@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix II.

[Appendix I table not reproduced: for each IDS in the sample, payer mix as a percentage of the patient population—uninsured, Medicaid, Medicare, private, and other. Table note: Some IDSs are organized into multiple subsystems—local or regional delivery systems that are organized below the system level—that integrate clinical care within themselves. For IDSs organized at the subsystem level, the table identifies the subsystem we studied for this report and gives the system name in parentheses.]

In addition to the contact named above, Helene F. Toiv, Assistant Director; Anne Dievler; Joanne Jee; Martha R.W. Kelly; Mariel Lifshitz; Kate Nast; Roseanne Price; Janet L. Sparks; Margaret J. Weber; and Jennifer Whitworth made key contributions to this report.
Health care delivery in the United States often lacks coordination and communication across providers and settings. This fragmentation can lead to poor quality of care, medical errors, and higher costs. Providers have formed integrated delivery systems (IDS) to improve efficiency, quality, and access. The Health Care Safety Net Act of 2008 directed GAO to report on IDSs that serve underserved populations—those that are uninsured or medically underserved (i.e., facing economic, geographic, cultural, or linguistic barriers to care, including Medicaid enrollees and rural populations). In October 2009, GAO provided an oral briefing. In this follow-on report, GAO describes (1) organizational features IDSs use to support strategies to improve care; (2) approaches IDSs use to facilitate access for underserved populations; and (3) challenges IDSs encounter in providing care, including to underserved populations. GAO selected a judgmental sample of 15 private and public IDSs that are clinically integrated across primary, specialty, and acute care; they vary in their degree of integration, specific organizational features, and payer mix (e.g., extent to which they serve Medicare and Medicaid beneficiaries and the uninsured). GAO interviewed chief medical officers or other system officials at all 15 IDSs and conducted site visits at 4 IDSs, interviewing system executives and clinical staff.

IDSs in GAO's sample reported that using electronic health records (EHR), operating health insurance plans, and employing physicians all support strategies to improve patient care. An EHR contains patient and care information, such as progress notes and medications. Some IDSs said that using EHRs supports their patient care strategies such as care coordination, disease management, and use of care protocols by increasing the availability of individual patient and patient population data and by improving communication among providers. IDSs also reported that operating a health insurance plan can support patient care strategies by providing the IDS with both financial resources, such as savings from reducing avoidable hospitalizations for health insurance plan members, and data on plan members. For example, financial resources could be used to fund services such as care coordination—which many insurers do not reimburse—and the data could assist with strategies such as disease management. Employment of physicians was reported to facilitate physician accountability for quality of care because physicians who are employed by the IDS must meet certain performance indicators, and the IDSs collect data on and review physician performance. Employment of physicians was also reported to increase adherence to care protocols and to facilitate provision of care to underserved populations through compensation that mitigates physicians' concerns that they might not receive payment from uninsured patients.

IDSs in the sample discussed several approaches they use to facilitate access to care for underserved populations. These approaches include using community-based settings, such as school-based health centers and federally qualified health centers (FQHC); conducting outreach; helping patients apply for coverage programs such as Medicaid; providing financial assistance; and collaborating with community organizations, including faith-based organizations. For example, some IDSs operate FQHCs within their system, and others collaborate with local FQHCs that are not part of their system.
In addition, to improve access to mental health care services for patients, including those in underserved populations, some IDSs integrate mental health and primary care services. IDSs in the sample reported facing various operational challenges in providing care, including care for underserved populations. Some reported that not receiving reimbursement from health care insurance companies for the care coordination services they provide to patients is a financial challenge. Other operational challenges IDSs identified included finding specialty care for underserved patients, including mental health care; sharing clinical information in patients' EHRs with providers outside the system; and changing management and physician cultures to adapt to organizational change. The Department of Health and Human Services reviewed a draft of this report and provided technical comments, which GAO incorporated as appropriate.
We reported in 2008 that defined benefit (DB) plan investments in hedge funds and private equity have grown, but such investments are generally a small portion of plan assets. This remains the case today. According to a Pensions & Investments survey, the percentage of large plans (as measured by total plan assets) investing in hedge funds grew from 11 percent in 2001 to 60 percent in 2010 (see fig. 1). Over the same time period, the percentage of large plans that invest in private equity grew at a much slower rate—71 percent to 92 percent—likely because a much larger percentage of plans were already invested in private equity in 2001. Data from the same survey reveal that investments in hedge funds and private equity typically constitute a small share of plan assets. The average allocation to hedge funds among plans with such investments was a little over 5 percent in 2010. Similarly, among plans with investments in private equity, the average allocation was a little over 9 percent. Although the majority of plans with investments in hedge funds or private equity have small allocations to these assets, a few plans have relatively large allocations, according to the Pensions & Investments survey. Of the 78 large plans that reported hedge fund investments in 2010, 20 had allocations of 10 percent or more (see fig. 2). The highest reported hedge fund allocation was 33 percent of total assets. Similarly, of the 121 plans that reported private equity investments in 2010, 34 had allocations of 10 percent or more, and the highest reported private equity allocation was 30 percent. Available survey data show that larger plans, measured by total plan assets, are more likely to invest in hedge funds and private equity compared with midsize plans. A 2010 survey by Greenwich Associates found that 22 percent of midsize plans—those with $250 million to $500 million in total assets—were invested in hedge funds compared with 40 percent of the largest plans—those with over $5 billion in total assets (see fig. 3). Survey data on plans with less than $200 million in assets are unavailable and, in the absence of this information, the extent to which these smaller plans invest in hedge funds and private equity is unclear.

One of the major challenges that both hedge fund and private equity investments pose to plan sponsors is uncertainty over the current value of the sponsors' investment. With regard to hedge funds, we noted that plan officials may lack information on both the nature of the specific underlying holdings of the hedge fund and the aggregate value of those holdings on a day-to-day basis. Because many hedge funds may own thinly traded securities and derivatives whose valuation can be complex and subjective, a plan official may not be able to obtain timely information on the value of assets owned by a hedge fund. Further, hedge fund managers may decline to disclose information on asset holdings and the net value of individual assets largely because the release of such information could compromise their trading strategy. In addition, even if hedge fund managers were to provide detailed positions, these managers may seek to profit through complex and simultaneous positions and can abruptly change their positions and trading tactics in order to achieve a desired return as changing market conditions warrant, making it difficult for plans to independently ascertain the value or fully assess the degree of investment risk.
Although we noted in January 2008 that some hedge funds have improved disclosure and transparency about their operations because of the demands of institutional investors, several pension plans cited limited transparency as a prime reason they had chosen not to invest in hedge funds. As with hedge funds, valuations of private equity investments are uncertain during the investment's long duration, which often lasts 10 years or more. Unlike investments that are traded and priced in public markets, plan officials have limited information on the value of private equity investments until the underlying holdings are sold. In some cases, private equity funds estimate the value of the fund by comparing the value of companies in their portfolio with the value of comparable publicly traded assets. However, prior to the sale of underlying investments, assessing the value of a private equity fund is difficult.

While any plan investment may fail to deliver expected returns over time, hedge fund and private equity investments pose investment challenges beyond those posed by traditional investments. For example, both hedge fund and private equity managers may use leverage—that is, borrowed money or other techniques—to potentially increase an investment's return without increasing the amount of capital invested. Although registered investment companies are subject to strict leverage limits, a hedge fund or private equity fund can make relatively unrestricted use of leverage. Leverage can magnify profits, but can also magnify losses to the fund if the market goes against the fund's expectations (see the illustrative example below). In addition, a private equity fund manager's strategy typically involves concentrating its holdings in a limited number of underlying companies—generally about 10 to 15 companies, often in the same sector. The returns for such concentrated, undiversified funds are highly susceptible to the success or failure of each underlying company and related market sector conditions. Further, hedge funds and private equity funds can also feature relatively costly fee structures compared with those of mutual funds. These fee structures can have a significant impact on net investment returns. Despite these fee structures, pension plan officials we contacted cited attaining returns superior to those attained in the stock market as a reason for investing in hedge funds and private equity. One plan official noted that as long as hedge funds add value net of fees, the plan found the higher fees acceptable.

Hedge funds and private equity are also relatively illiquid investments—that is, investors generally cannot easily redeem their investments on demand. Hedge funds often require an initial lockup of a plan's investment for a year or more, during which an investor cannot cash out of the hedge fund. After the initial lockup period, hedge funds offer only periodic liquidity, such as quarterly. Hedge funds impose such liquidity limits because sudden liquidations could disrupt a carefully calibrated investment strategy. Nonetheless, these constraints also pose certain disadvantages to plan sponsors, such as inhibiting a plan's ability to limit a hedge fund's investment loss. Private equity funds require an even longer-term commitment than hedge fund investments; during that period, a plan may have no ability to redeem its investment, and the fund can often require additional capital contributions from the plan over the life of the investment.
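The magnifying effect of leverage noted above can be made concrete with a minimal sketch. The 2-to-1 leverage ratio, the 4 percent borrowing cost, and the 10 percent market moves below are illustrative assumptions, not figures drawn from any fund discussed in this statement.

```python
# A minimal sketch of how leverage magnifies both gains and losses.
# All figures (2-to-1 leverage, a 4 percent borrowing cost, 10 percent
# market moves) are assumptions for illustration only.

def levered_return(asset_return, leverage=2.0, borrow_rate=0.04):
    """Return on investor capital when `leverage` dollars of assets
    are held per dollar of capital, with interest paid on the rest."""
    borrowed = leverage - 1.0
    return leverage * asset_return - borrowed * borrow_rate

for move in (0.10, -0.10):
    print(f"asset return {move:+.0%} -> investor return {levered_return(move):+.0%}")

# Output:
#   asset return +10% -> investor return +16%
#   asset return -10% -> investor return -24%
```

Note the asymmetry in the sketch: the borrowing cost is a drag in both directions, so the levered loss is deeper than the levered gain is high.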
A private equity fund cycle typically follows a pattern known as the J-curve, which reflects an initial period of negative returns during which investors provide the fund with capital to invest in underlying companies and then obtain returns over time as investments mature. We reported that pension plans investing in hedge funds are also exposed to operational risk—that is, the risk of investment loss because of inadequate or failed internal processes, people, and systems, or problems with external service providers. Operational problems can arise from a number of sources, including inexperienced operations personnel; inadequate internal controls; lack of compliance standards and enforcement; errors in analyzing, trading, or recording positions; or outright fraud. While most investments can pose some type of operational risk, according to a report by an investment consulting firm, many hedge funds engage in active, complex, and sometimes heavily leveraged trading, and a failure of operational functions, such as processing or clearing one or more trades, may have particularly grave consequences for the overall position of the hedge fund.

Pension plan officials we spoke with take a number of steps in an attempt to mitigate the risks and challenges of investing in hedge funds and private equity. First, plan sponsors noted the importance of careful and deliberate fund selection when investing in hedge funds and private equity. In the case of hedge funds, plan sponsors emphasized defining a clear purpose and strategy for their hedge fund investments. Most of the plans we contacted described one or more specific strategies for their hedge fund investments. Several sources stated that private equity investments—particularly venture capital investments—have greater variation in performance among funds than do other asset classes, such as domestic stocks, and that plans therefore must invest with top-performing funds in order to achieve long-term returns in excess of those of the stock market. Plan sponsors and others also cited the importance of negotiating key terms of investments in hedge funds and private equity. They said that in the case of hedge funds, such terms can include fee structure and conditions, degree of transparency, valuation procedures, redemption provisions, and degree of leverage employed. For example, pension plans may want to ensure that they will not pay a performance fee unless the value of the hedge fund investment passes a previous peak value of the fund shares—known as a high-water mark (see the illustrative example below). Key contract terms for private equity may also include fee structure and valuation procedures, though one plan sponsor noted the ability to negotiate favorable contract terms is limited when investing in top-performing funds, because investing in such funds is highly competitive. Due diligence and ongoing monitoring, beyond those required for traditional investments, are also important. For hedge funds, due diligence can be a wide-ranging process that includes study of a hedge fund's investment, valuation, and risk management processes and compliance procedures, as well as a review of back office operations. As with hedge fund investments, plans take additional steps to mitigate the challenges of investing in private equity through extensive and ongoing monitoring, beyond that required for traditional investments.
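The high-water-mark provision mentioned above lends itself to a short illustration: the manager earns a performance fee only on gains above the fund's previous peak value. The sketch below is illustrative only; the 20 percent fee rate and the year-end fund values are assumptions, not terms from any fund we reviewed.

```python
# A minimal sketch of a high-water-mark provision: a performance fee
# is charged only on gains above the fund's previous peak value.
# The 20 percent fee rate and the yearly values are assumptions.

def performance_fees(values, fee_rate=0.20):
    """Fee charged each period on the gain above the running peak."""
    high_water_mark = values[0]
    fees = []
    for value in values[1:]:
        fees.append(fee_rate * max(0.0, value - high_water_mark))
        high_water_mark = max(high_water_mark, value)
    return fees

# A fund that rises to 120, falls to 90, recovers to 120, then reaches 130:
print(performance_fees([100.0, 120.0, 90.0, 120.0, 130.0]))
# [4.0, 0.0, 0.0, 2.0]
```

In this sketch, a fee is paid on the initial run-up and on the 10 points of gain above the earlier peak of 120, but not while the fund is merely recovering its losses; without the high-water mark, the investor would pay a second fee on the same gains during the recovery from 90 to 120.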
Plan representatives we interviewed said these monitoring steps include regularly reviewing reports on the performance of the underlying investments of the private equity fund and having periodic meetings with fund managers. In some cases, plans participate on the advisory board of a private equity fund, which provides a greater opportunity for oversight of the fund's operations and new investments; however, this involves a significant time commitment and may not be feasible for every private equity investment.

Also, several plan sponsors address some of the risks and challenges of investing in hedge funds and private equity by investing via a fund of funds. Investing in a fund of funds provides investors with diversification across multiple funds, which can mitigate the effect of one manager's poor performance. In particular, a fund of private equity funds can allow plans to invest in a variety of managers, industries, geographies, and years of initial capital investment. In addition, a plan sponsor may be able to rely on a fund of funds manager to conduct negotiations, due diligence, and monitoring of the underlying hedge funds. As we reported, funds of funds can be appropriate if plan sponsors do not have the skills necessary to manage a portfolio of hedge funds. In addition, investing through a fund of funds may provide a plan better access to hedge funds or private equity funds than a plan would be able to obtain through direct investment. Nonetheless, investing in a fund of funds has some drawbacks and limitations, including an additional layer of fees—such as a 1 percent flat fee and a performance fee of 5 to 10 percent of returns—on top of the substantial fees charged by the underlying hedge funds themselves. Furthermore, funds of funds also pose the same challenges as hedge funds, such as limited transparency and liquidity, and the need for the plan to conduct a due diligence review of the fund of funds firm. However, investing through a fund of funds does not relieve plan sponsors of their fiduciary duties; accordingly, the plan sponsors must act prudently in selecting and monitoring funds of funds.

According to plan officials, regulators, and others, some pension plans—especially smaller plans—may find it particularly difficult to address the various demands of hedge fund investing. For example, medium-size and small plans may not have the expertise to oversee the trading and investment practices of hedge funds. Some plans may also lack the ability to conduct the necessary due diligence and monitoring of hedge fund investments. Smaller plans may have only one- or two-person staffs, may lack the resources to hire outside consulting expertise, and may be locked out of top-performing funds. To a lesser extent, some larger plans may also lack sufficient expertise. A representative of one pension plan with more than $32 billion in total assets noted that before investing in hedge funds, the plan would have to build up its staff in order to conduct the due diligence necessary during the fund selection process. In light of these challenges, and as predecessors to this 2011 ERISA Advisory Council have concluded, the Department of Labor (Labor) can play a role in helping to ensure that plans fulfill their Employee Retirement Income Security Act of 1974 (ERISA) fiduciary duties when investing in hedge funds and private equity.
For example, in 2006, the ERISA Advisory Council recommended that Labor publish guidance about the unique features of hedge funds and matters for consideration in their use by qualified plans. In 2008, the ERISA Advisory Council recommended that Labor publish guidance to clarify the role of ERISA fiduciaries in selecting, valuing, and accounting for hard-to-value assets, of which many hedge funds and private equity funds are largely composed. In addition, the Investor's Committee formed by the President's Working Group on Financial Markets published a report in January 2009 on the best practices for hedge fund investors. The report acknowledged that hedge fund investments may not be suitable for some investors and provided many recommendations for investors selecting and monitoring their hedge fund investments—including best practices for valuation, such as obtaining a written statement of the fund's valuation policies and procedures and ensuring the fund's portfolio is being valued in accordance with Generally Accepted Accounting Principles (GAAP). In 2008, we recommended that Labor provide guidance for qualified plans under ERISA on the unique challenges of investing in hedge funds and private equity and the steps plans should take to address these challenges. For example, we stated that Labor's Employee Benefits Security Administration (EBSA) could outline the implications of a hedge fund's or fund of funds' limited transparency for the fiduciary duty of prudent oversight. EBSA could also reflect on the implications of these best practices for some plans—especially smaller plans—that might not have the resources to take actions consistent with the best practices, and thus would be at risk of making imprudent investments in hedge funds. Finally, we noted that while EBSA is not tasked with offering guidance to public sector plans, such plans may nonetheless benefit from such guidance. Although Labor generally agreed with our recommendation, the agency explained that the lack of uniformity among these investments could complicate the development of comprehensive guidance for plan fiduciaries. To date, Labor has not acted on this recommendation.

As plan sponsors seek to better ensure adequate return on assets under management, recent trends suggest that investments in alternative assets such as hedge funds and private equity are becoming more commonplace. In light of these trends and ongoing public equity market volatility, it is reasonable to expect that the number of plan sponsors making such investments will increase in the future. Our past work indicates that such assets may serve useful purposes in a well-thought-out investment program, offering plan sponsors advantages that may not be as readily available from more traditional investment options. Nonetheless, it is equally clear that investments in such assets place demands on plan sponsors that are significantly beyond the demands of more traditional asset classes. These challenges can be daunting even for large plan sponsors. Accordingly, we believe that, as we recommended in 2008, the Secretary of Labor should provide guidance regarding investing in hedge funds and private equity specifically designed for qualified plans under ERISA. In particular, we believe that a discussion of the challenges that such investments pose to small plan sponsors would be beneficial.

This concludes my prepared statement. I would be happy to answer any questions that the council may have.
For further questions on this statement, please contact me at (202) 512-7215. Individuals making key contributions to this statement include Michael Hartnett, Sharon Hermes, David Lehrer, and Amber Yancey Carroll.

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Millions of Americans rely on retirement savings plans for their financial well-being in retirement. Plan sponsors are increasingly investing in assets such as hedge funds (privately administered pooled investment vehicles that typically engage in active trading strategies) and private equity funds (privately managed investment pools that typically make long-term investments in private companies). Given ongoing market challenges, it is important that plan fiduciaries apply best practices and choose wisely when investing plan assets to ensure that plans are adequately funded to meet future promised benefits. This statement addresses (1) what is known about the extent to which defined benefit plans have invested in hedge funds and private equity, (2) challenges that such plans face in investing in hedge funds and private equity, (3) steps that plan sponsors can take to address these challenges, and (4) the implications of these challenges for plan sponsors and the federal government.

A growing number of private and public sector pension plans have invested in hedge funds and private equity, but such investments generally constitute a small share of total plan assets. According to a survey of large plans, the share of plans with investments in hedge funds grew from 11 percent in 2001 to 60 percent in 2010. Over the same time period, investments in private equity were more prevalent but grew more slowly--an increase from 71 percent of large plans in 2001 to 92 percent in 2010. Still, the average allocation of plan assets to hedge funds was a little over 5 percent, and the average allocation to private equity was a little over 9 percent. Available data also show that investments in hedge funds and private equity are more common among large pension plans, measured by assets under management, than among midsize plans. Survey information on smaller plans is unavailable, so the extent to which these plans invest in hedge funds or private equity is unknown.

Hedge funds and private equity investments pose a number of risks and challenges beyond those posed by traditional investments. For example, investors in hedge funds and private equity face uncertainty about the precise valuation of their investment. Hedge funds may, for example, own thinly traded assets whose valuation can be complex and subjective, making valuation difficult. Further, hedge funds and private equity funds may use considerable leverage--the use of borrowed money or other techniques--which can magnify profits but can also magnify losses if the market goes against the fund's expectations. Also, both are illiquid investments--that is, they cannot generally be redeemed on demand. Finally, investing in hedge funds can pose operational risks--that is, the risk of investment loss from inadequate or failed internal processes, people, and systems, or from problems with external service providers, rather than from an unsuccessful investment strategy.

Plan sponsors GAO spoke with address these challenges in a number of ways, such as through careful and deliberate fund selection and negotiation of key contract terms. For example, investors in both hedge funds and private equity funds may be able to negotiate the fee structure, valuation procedures, and the degree of leverage employed. Also, plans address various concerns through due diligence and monitoring, such as careful review of investment, valuation, and risk management processes.
The Department of Labor (Labor) has a role in helping to ensure that private plans fulfill their fiduciary duties, which includes educating employers and service providers about their fiduciary responsibilities under the Employee Retirement Income Security Act of 1974 (ERISA). According to plan officials, state and federal regulators, and others, some pension plans, such as smaller plans, may have particular difficulties in addressing the various demands of hedge fund and private equity investing. In light of this, in 2008, GAO recommended that Labor provide guidance on the challenges of investing in hedge funds and private equity and the steps plans should take to address these challenges. Labor generally agreed with the recommendation but has yet to take action. The agency explained that the lack of uniformity among these investments could complicate the development of comprehensive guidance for plan fiduciaries.
Long-term care includes many types of services needed when a person has a physical or mental disability. Individuals needing long-term care have varying degrees of difficulty in performing some activities of daily living without assistance, such as bathing, dressing, toileting, eating, and moving from one location to another. They may also have trouble with instrumental activities of daily living, which include such tasks as preparing food, housekeeping, and handling finances. They may have a mental impairment, such as Alzheimer’s disease, that necessitates assistance with tasks such as taking medications, or they may require supervision to avoid harming themselves or others. Although a chronic physical or mental disability may occur at any age, the older an individual becomes, the more likely a disability will develop or worsen. According to the 1999 National Long-Term Care Survey, approximately 7 million elderly had some sort of disability in 1999, including about 1 million needing assistance with at least five activities of daily living. Assistance takes place in many forms and settings, including institutional care in nursing homes or assisted living facilities and home care services. Further, many disabled individuals rely exclusively on unpaid care from family members or other informal caregivers.

Nationally, spending from all public and private sources for long-term care for all ages totaled about $183 billion in 2003, accounting for about 13 percent of all health care expenditures. About 69 percent of expenditures for long-term care services were paid for by public programs, primarily Medicaid and Medicare. Individuals financed about 20 percent of these expenditures out of pocket, and, less often, private insurers paid for long-term care. Moreover, these expenditures did not include the extensive reliance on unpaid long-term care provided by family members and other informal caregivers. Figure 1 shows the major sources financing these expenditures.

Medicaid, the joint federal-state health-financing program for low-income individuals, continues to be the largest funding source for long-term care. Medicaid provides coverage for poor persons and for many individuals who have become nearly impoverished by “spending down” their assets to cover the high costs of their long-term care. For example, many elderly persons become eligible for Medicaid as a result of depleting their assets to pay for nursing home care that Medicare does not cover. In 2003, Medicaid paid 48 percent (about $87 billion) of total long-term care expenditures. States share responsibility with the federal government for Medicaid, paying on average approximately 43 percent of total Medicaid costs in fiscal year 2002. Eligibility for Medicaid-covered long-term care services varies widely among states. Spending also varies across states—for example, in fiscal year 2000, Medicaid per capita long-term care expenditures ranged from $73 per year in Nevada to $680 per year in New York. Nationally, about 57 percent of Medicaid long-term care spending in 2002 was for the elderly. In 2003, nursing home expenditures dominated Medicaid long-term care expenditures, accounting for about 47 percent of its long-term care spending. Home care expenditures make up a growing share of Medicaid long-term care spending as many states use the flexibility available within the Medicaid program to provide long-term care services in home- and community-based settings.
From 2000 through 2003, home and personal care expenditures grew at an average annual rate of 15.9 percent, compared with 4.0 percent for nursing facility spending. Expenditures for Medicaid home- and community-based services for long-term care almost doubled from 1998 to 2003—from about $10 billion to about $19 billion. (A short calculation below makes the implied annual growth rate explicit.)

Other significant long-term care financing sources include the following:

Individuals’ out-of-pocket payments, the second largest source of long-term care expenditures, accounted for 20 percent (about $38 billion) of total expenditures in 2003. The vast majority (82 percent) of these payments were used for nursing home care.

Medicare spending accounted for 18 percent (about $33 billion) of total long-term care expenditures in 2003. While Medicare primarily covers acute care, it also pays for limited stays in post-acute skilled nursing care facilities and home health care.

Private insurance, which includes both traditional health insurance and long-term care insurance, accounted for 9 percent (about $16 billion) of long-term care expenditures in 2003.

Before focusing on the increased burden that long-term care will place on federal and state budgets, it is important to look at the broader budgetary context. As we look ahead, we face an unprecedented demographic challenge with the aging of the baby boom generation. As the share of the population 65 and over climbs, federal spending on the elderly will absorb a larger and ultimately unsustainable share of the federal budget and economic resources. Federal spending for Medicaid, Medicare, and Social Security is expected to surge—nearly doubling by 2035—as people live longer and spend more time in retirement. In addition, advances in medical technology are likely to keep pushing up the cost of health care. Moreover, the baby boomers will be followed by relatively fewer workers to support them in retirement, leaving a smaller employment base from which to finance these higher costs.

Based on CBO’s long-term Medicaid estimates, the federal share of Medicaid as a percent of gross domestic product (GDP) will grow from today’s 1.5 percent to 2.6 percent in 2035 and reach 4.8 percent in 2080. Under the 2005 Medicare trustees’ intermediate estimates, Medicare will almost triple as a share of GDP between now and 2035 (from 2.7 percent to 7.5 percent) and reach 13.8 percent of GDP in 2080. Under the Social Security trustees’ intermediate estimates, Social Security spending will grow as a share of GDP from 4.3 percent today to 6.3 percent in 2035, reaching 6.4 percent in 2080. (See fig. 2.) Combined, in 2080 almost one-quarter of GDP will be devoted to federal spending for these three programs alone.

To move into the future with no changes in federal health and retirement programs is to envision a very different role for the federal government. Our long-term budget simulations serve to illustrate the increasing constraints on federal budgetary flexibility that will be driven by entitlement spending growth. Assume, for example, that all expiring tax provisions are extended, revenue remains constant thereafter as a share of GDP, and discretionary spending keeps pace with the economy. Under these conditions, by 2040 federal revenues may be adequate to pay little more than interest on the federal debt. (See fig. 3.) Beginning about 2010, the share of the population that is age 65 or older will begin to climb, with profound implications for our society, our economy, and the financial condition of these entitlement programs.
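As flagged above, the following short sketch checks the compound growth implied by the Medicaid home- and community-based services figures, about $10 billion in 1998 rising to about $19 billion in 2003. The inputs are the rounded figures from this testimony, and the calculation is ordinary compound-growth arithmetic, not a CBO or GAO projection method.

```python
# Illustrative compound-growth arithmetic for the rounded spending figures
# cited in this testimony; not a CBO or GAO projection methodology.

def cagr(start_value, end_value, years):
    """Compound annual growth rate between two values over a span of years."""
    return (end_value / start_value) ** (1.0 / years) - 1.0

# Medicaid home- and community-based services: about $10B (1998) to $19B (2003).
hcbs_rate = cagr(10e9, 19e9, 2003 - 1998)
print(f"Implied annual growth, 1998-2003: {hcbs_rate:.1%}")  # roughly 13.7 percent

# For comparison, compounding the 2000-2003 average annual rates cited above:
print(f"Home and personal care at 15.9% over 3 years:  x{1.159 ** 3:.2f}")  # ~1.56
print(f"Nursing facility spending at 4.0% over 3 years: x{1.04 ** 3:.2f}")  # ~1.12
```

Growth of roughly 14 percent a year doubles spending in a little over 5 years, which is why home- and community-based services account for a steadily rising share of Medicaid long-term care spending.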
Both Social Security and the Hospital Insurance portion of Medicare are largely financed as pay-as-you-go systems in which current workers’ payroll taxes pay current retirees’ benefits. Therefore, these programs are directly affected by the relative size of the populations of covered workers and beneficiaries. Historically, this relationship has been favorable. In the near future, however, the overall worker-to-retiree ratio will change in ways that threaten the financial solvency and sustainability of these entitlement programs. In 2000, there were 4.8 working-age persons (20 to 64 years) per elderly person, but by 2030, this ratio is projected to decline to 2.9. This decline in the overall worker-to-retiree ratio will be due both to the surge in retirees brought about by the aging baby boom generation and to falling fertility rates, which translate into relatively fewer workers in the near future.

Social Security’s projected cost increases are due predominantly to the burgeoning retiree population. Even with the increase in the Social Security eligibility age to 67, these entitlement costs are anticipated to increase dramatically in the coming decades as a larger share of the population becomes eligible for Social Security and if, as expected, average longevity increases.

As the baby boom generation retires and the Medicare-eligible population swells, the imbalance between outlays and revenues will increase dramatically. Medicare growth rates reflect not only a rapidly increasing beneficiary population, but also the escalation of health care costs at rates well exceeding general rates of inflation. While advances in science and technology have greatly expanded the capabilities of medical science, disproportionate increases in the use of health services have been fueled by the lack of effective means to channel patients into consuming, and providers into offering, only appropriate services. In fiscal year 2004, Medicare spending grew by 8.5 percent and is up 9.9 percent for the first 6 months of fiscal year 2005. The implementation of the Medicare outpatient drug benefit in January 2006 will further increase Medicare spending in future years.

To obtain a more complete picture of the future health care entitlement burden, especially as it relates to long-term care, we must also acknowledge and discuss the important role of Medicaid. In 2003, approximately 69 percent of all Medicaid dollars were dedicated to services for the elderly and people with disabilities. Medicaid is the second largest and fastest growing item in overall state spending. At the February 2005 National Governors Association meeting, governors reported that states face having to propose cuts in their Medicaid programs. Over the longer term, the increase in the number of elderly will add considerably to the strain on federal and state budgets as governments struggle to finance increased Medicaid spending. In addition, this strain on state Medicaid budgets may be exacerbated by fluctuations in the business cycle. State revenues decline during economic downturns, while the needs of the disabled for assistance remain constant.

In coming decades, the sheer number of aging baby boomers will swell the number of elderly with disabilities and the need for services. These overwhelming numbers offset the slight reductions in the prevalence of disability among the elderly reported in recent years. In 2000, individuals aged 65 or older numbered 35.1 million people—12.4 percent of our nation’s total population.
By 2020, that percentage will increase by nearly one-third to 16.3 percent—one in six Americans—and will represent nearly 20 million more elderly than there were in 2000. By 2040, the number of elderly aged 85 years and older—the age group most likely to need long-term care services—is projected to increase by more than 250 percent, from 4.3 million in 2000 to 15.4 million (see fig. 4).

It is difficult to precisely predict the future increase in the number of elderly with disabilities, given the counterbalancing trends of an increase in the total number of elderly and a possible continued decrease in the prevalence of disability. The number of elderly with disabilities remained fairly constant from 1982 through 1999, while the percentage of those with disabilities fell between 1 and 2 percent a year from 1984 through 1999. Possible factors contributing to this decreased prevalence of disability include improved health care, improved socioeconomic status, and better health behaviors. The positive benefits of the decreased prevalence of disability, however, will be overwhelmed by the sheer numbers of aged baby boomers. The total number of disabled elderly is projected to increase, with estimates varying from an increase of one-third to twice the current level, or as high as 12.1 million by 2040.

The increased number of disabled elderly will exacerbate current problems in the provision and financing of long-term care services. For example, in 2000 it was reported that approximately one in five adults with long-term care needs and living in the community reported an inability to receive needed care, such as assistance in toileting or eating, often with adverse consequences. In addition, disabled elderly may lack family support or the financial means to purchase medical services. Long-term care costs can be financially catastrophic for families. Services, such as nursing home care, are very expensive; while costs can vary widely, a year in a nursing home typically costs more than $50,000, and in some locations can be considerably more. Because of financial constraints, many elderly rely heavily on unpaid caregivers, usually family members and friends; overall, the majority of care received in the community is unpaid. However, in coming decades, fewer elderly may have the option of unpaid care because a smaller proportion may have a spouse, adult child, or sibling to provide it. By 2020, the number of elderly who will be living alone with no living children or siblings is estimated to reach 1.2 million, almost twice the number without family support in 1990. In addition, geographic dispersion of families may further reduce the number of unpaid caregivers available to elderly baby boomers.

Public and private spending on long-term care was about $183 billion for persons of all ages in 2003. CBO projected in 1999 that long-term care spending for the elderly could increase by more than two-and-a-half times from 2000 to 2040. A 2001 study projected that these expenditures could quadruple from 2000 through 2050, reaching $379 billion in 2050. (See fig. 5.) Estimates of future spending are imprecise, however, due to the uncertain effect of several important factors, including how many elderly will need assistance, the types of care they will use, and the availability of public and private sources of payment for care.
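The percentage changes cited above follow directly from the underlying population figures; the short sketch below simply reproduces that arithmetic using the rounded numbers from this testimony.

```python
# Arithmetic behind the population projections cited above, using the rounded
# figures from this testimony; purely illustrative.

elderly_share_2000 = 12.4   # percent of total population aged 65+, 2000
elderly_share_2020 = 16.3   # projected percent of total population aged 65+, 2020
oldest_2000 = 4.3e6         # persons aged 85+, 2000
oldest_2040 = 15.4e6        # projected persons aged 85+, 2040

share_growth = (elderly_share_2020 - elderly_share_2000) / elderly_share_2000
oldest_growth = (oldest_2040 - oldest_2000) / oldest_2000

print(f"Growth in 65+ population share, 2000-2020: {share_growth:.0%}")  # ~31%, "nearly one-third"
print(f"Growth in 85+ population, 2000-2040: {oldest_growth:.0%}")       # ~258%, "more than 250 percent"
```

Both results match the characterizations in the text: a roughly one-third rise in the elderly share of the population between 2000 and 2020 and an increase of more than 250 percent in the 85-and-older group by 2040.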
Absent significant changes in the availability of public and private payment sources, future spending is expected to continue to rely heavily on public payers, particularly Medicaid, which estimates indicate paid about 35 percent of long-term care expenditures for the elderly in 2004.

One factor that will affect spending is how many elderly will need assistance. As noted earlier, even with continued decreases in the prevalence of disability, aging baby boomers are expected to have a disproportionate effect on the demand for long-term care.

Another factor influencing projected long-term care spending is the type of care that the baby boom generation will use. Per capita expenditures for nursing home care greatly exceed those for care provided in other settings. Since the 1990s, there have been increases in the use of paid home care as well as in assisted living facilities, a relatively new and still developing type of housing. It is unclear what effect continued growth in paid home care, assisted living facilities, or other care alternatives may have on future expenditures. Any increase in the availability of home care may reduce the average cost per disabled person, but the effect could be offset if there is an increase in the use of paid home care by persons currently not receiving these services.

Changes in the availability of public and private sources to pay for care will also affect expenditures. Private long-term care insurance has been viewed as a possible means of reducing catastrophic financial risk for the elderly needing long-term care and relieving some of the financial burden currently falling on public long-term care programs. Increases in private insurance may lower public expenditures but raise spending overall, because insurance increases individuals’ financial resources when they become disabled and allows the purchase of additional services. The number of policies in force remains relatively small despite improvements in policy offerings and the tax deductibility of premiums. However, as we have previously testified, questions about the affordability of long-term care policies and the value of the coverage relative to the premiums charged have posed barriers to more widespread purchase of these policies. Further, many baby boomers continue to assume they will never need such coverage or mistakenly believe that Medicare or their own private health insurance will provide comprehensive coverage for the services they need. If private long-term care insurance is expected to play a larger role in financing future generations’ long-term care needs, consumers need to be better informed about the costs of long-term care, the likelihood that they may need these services, and the limits of coverage through public programs and private health insurance.

With or without increases in the availability of private insurance, Medicaid and Medicare are expected to continue to pay for the majority of long-term care services for the elderly in the future. Without fundamental financing changes, Medicaid can be expected to remain one of the largest funding sources for long-term care services for aging baby boomers, with Medicaid expenditures for long-term care for the elderly reaching as high as $132 billion by 2050. As noted earlier, this increasing burden will strain both federal and state governments.
Given the anticipated increase in demand for long-term care services resulting from the aging of the baby boom generation, the concerns about the availability of services, and the expected further stress on federal and state budgets and individuals’ financial resources, some policymakers and advocates have called for long-term care financing reforms. Indeed, we identified options for rethinking the federal, state, and private insurance roles in financing long-term care as one of the key questions that our nation needs to face as it addresses 21st century challenges. The Comptroller General previously testified in 2002 on several considerations for policymakers to keep in mind when considering reforms for long-term care financing, and these considerations remain relevant today.

At the outset, it is important to recognize that long-term care services are not just another set of traditional health care services. Meeting acute and chronic health care needs is an important element of caring for aging and disabled individuals. Long-term care, however, encompasses services related to maintaining quality of life, preserving individual dignity, and satisfying preferences in lifestyle for someone with a disability severe enough to require the assistance of others in everyday activities. Some long-term care services are akin to other health care services, such as personal assistance with activities of daily living or monitoring or supervision to cope with the effects of dementia. Other aspects of long-term care, such as housing, nutrition, and transportation, are services that all of us consume daily but that become an integral part of long-term care for a person with a disability. Disabilities can affect housing, nutritional, or transportation needs. More important, where one wants to live and what activities one wants to pursue also affect how needed services can be provided. Providing personal assistance in a congregate setting such as a nursing home or assisted living facility may satisfy more of an individual’s needs, be more efficient, and involve more direct supervision to ensure better quality than when caregivers travel to individuals’ homes to serve them one on one. Yet those options may conflict with a person’s preference to live at home and maintain autonomy in determining his or her daily activities.

Keeping in mind that policies need to take account of the differences involved in long-term care, there are several issues that policymakers may wish to consider as they address long-term care financing reforms. These include the following:

Determining societal responsibilities. A fundamental question is how much the choices of how long-term care needs are met should depend upon an individual’s own resources or whether society should supplement those resources to broaden the range of choices. For a person without a disability requiring long-term care, where to live and what activities to pursue are lifestyle choices based on individual preferences and resources. However, for someone with a disability, those lifestyle choices affect the costs of long-term care services. The individual’s own resources—including financial resources and the availability of family or other informal supports—may not be sufficient to preserve some of those choices and also obtain needed long-term care services. Societal responsibilities may include maintaining a safety net to meet individual needs for assistance. However, the safety net may not provide a full range of choices in how those needs are met.
Persons who require assistance multiple times a day and lack family members to provide some share of this assistance may not be able to have their needs met in their own homes. The costs of meeting such extensive needs may mean that sufficient public support is available only in settings such as assisted living facilities or nursing homes. More extensive public support may be extended, but decisions to do so should carefully consider affordability in the context of competing demands for our nation’s resources.

Considering the potential role of social insurance in financing. Government’s role in many situations has extended beyond providing a safety net. Sometimes this extended government role has been a result of efficiencies in having government undertake a function; in other cases, this role has been a policy choice. Some proposals have recommended either voluntary or mandatory social insurance to provide long-term care assistance to broad groups of beneficiaries. In evaluating such proposals, careful attention needs to be paid to the limitations and conditions under which services will be provided. In addition, who will be eligible and how such a program will be financed are critical choices. As in establishing a safety net, it is imperative that any option under consideration be thoroughly assessed for its affordability over the longer term.

Encouraging personal preparedness. Becoming disabled is a risk. Not everyone will experience disability during his or her lifetime, and even fewer persons will experience a severe disability requiring extensive assistance. This is the classic situation in which having insurance to provide additional resources to deal with a possible disability may be better than relying on personal saving for an event that may never occur. Insurance allows both persons who eventually will become disabled and those who will not to use more of their economic resources during their lifetimes, rather than setting those resources aside against the possibility of disability.

The public sector has at least two important potential roles in encouraging personal preparedness. One is to adequately educate people about the current divisions between personal and societal responsibilities. Only if the limits of public support are clear will individuals be likely to take steps to prepare for a possible disability. Currently, one of the factors contributing to the lack of preparation for long-term care among the elderly is a widespread misunderstanding about what services Medicare will cover. Another public sector role may be to assure the availability of sound private long-term care insurance policies and possibly to create incentives for their purchase. Progress has been made in improving the value of insurance policies through state insurance regulation and through strengthening the requirements for policies qualifying for favorable tax treatment enacted by the Health Insurance Portability and Accountability Act of 1996. Furthermore, since 2002 the federal government has offered long-term care insurance to federal employees, military personnel, retirees, and their families, the largest offering of long-term care insurance to date. While the federal government’s program is still very new, other employers and policymakers will likely be carefully watching the federal government’s experience in offering long-term care insurance.
Long-term care insurance remains an evolving product, and given the flux in how long-term care services are delivered, it is important to monitor whether long-term care insurance regulations need adjustments to ensure that consumers receive fair value for their premium dollars.

Recognizing the benefits, burdens, and costs of informal caregiving. Family and other informal caregivers play a critical role in supplying the bulk of long-term care to disabled persons. Effective policy must create incentives and supports for enabling informal caregivers to continue providing assistance. Further, care should be taken to avoid creating incentives that result in informal care being inappropriately supplanted by formal paid services. At the same time, it is important to recognize the physical, emotional, and social burdens that providing care imposes on the caregiver and the economic costs of caregiving to the caregiver and to society. Caregiving may create needs in caregivers themselves that require respite or other relief services. In addition, caregiving can conflict with caregivers’ employment, creating economic losses for caregivers and society. Such losses in productivity will become even more important in the coming decades as the proportion of the population that is working-age declines.

Assessing the balance of federal and state responsibilities to ensure adequate and equitable satisfaction of needs. Reforms in long-term care financing may require reevaluating the traditional federal and state financing roles to better ensure an equitable distribution of public support for individuals with disabilities. The variation across states in Medicaid spending per capita on long-term care reflects in part differences among states in the generosity of services as well as in fiscal capacity. Given these differences, having states assume primary responsibility for financing long-term care subjects individuals to different levels of support depending on where they live. In addition, because state revenues are sensitive to the business cycle and states generally must have balanced budgets, their services become vulnerable during economic downturns.

Adopting effective and efficient implementation and administration of reforms. Proposed reforms to better meet the increasing demand for long-term care within budget constraints will be successful only if they are administratively feasible, effectively reach targeted populations and unmet needs, and efficiently provide needed services at minimum cost while complementing already available services and financing sources.

Developing financially sustainable public commitments. Finally, as noted earlier, absent reform, existing federal entitlement commitments for Medicaid, Medicare, and Social Security will represent an increasing and potentially unsustainable share of the economy. States, too, are concerned about their budgetary commitments for long-term care through their share of the Medicaid program. Before committing to any additional public role in financing long-term care, it is imperative to provide reasonable assurance that revenues will be available to fund its future costs.

Mr. Chairman, this completes my prepared statement. I would be happy to respond to any questions you or other members of the subcommittee may have at this time.

For future contacts regarding this testimony, please call Kathryn G. Allen at (202) 512-7118. Other individuals who made key contributions include John Dicken, Linda F. Baker, Laura Sutton Elsberg, James R. McTigue, and Joseph Petko.
Long-term care relies heavily on financing by public payers, especially Medicaid, and has significant implications for state budgets as well as the federal budget. It includes an array of health, personal care, and supportive services provided to persons with physical or mental disabilities. As the baby boom generation ages, the number of elderly with disabilities will greatly expand the demand for long-term care services and will impose greater burdens on federal and state budgets.

GAO was asked to discuss the budgetary and other challenges resulting from the anticipated increase in demand for long-term care services. This testimony addresses (1) the pressure that entitlement spending for Medicare, Medicaid, and Social Security is expected to exert on the federal budget in coming decades; (2) how the aging of the baby boom population will increase the demand for long-term care services; and (3) how these trends will affect the current and future financing of long-term care services, particularly in federal and state budgets. The testimony also highlights several considerations for any possible reforms of long-term care financing. This testimony updates prior GAO work, particularly Long-Term Care: Aging Baby Boom Generation Will Increase Demand and Burden on Federal and State Budgets, GAO-02-544T (Washington, D.C.: March 21, 2002).

Over the coming decades, entitlement spending for Medicare, Medicaid, and Social Security is expected to absorb larger shares of federal revenue and threatens to crowd out other spending as the baby boom generation enters retirement age. The increasing demand for long-term care services, fueled in part by the baby boom generation, will also further strain federal and state budgets. Estimates suggest the future number of disabled elderly who cannot perform basic activities of daily living without assistance may as much as double from 2000 through 2040, resulting in a large increase in demand for long-term care services. Spending on long-term care services just for the elderly is estimated to increase by more than two-and-a-half times between 2000 and 2040 and could nearly quadruple in constant dollars between 2000 and 2050, to $379 billion, according to some estimates. Without fundamental financing changes, Medicaid can be expected to remain one of the largest funding sources, straining both federal and state governments.

Financing the increasing demand for long-term care services will be a significant 21st century challenge for the nation. A key question for policymakers is what options exist for rethinking the federal, state, and private roles in financing long-term care. In considering options for reforming long-term care financing, GAO notes that long-term care is not just about health care. It also comprises a variety of services an aged or disabled person requires to maintain quality of life--including housing, transportation, nutrition, and social support to help maintain independent living.
Given the challenges in providing and paying for these myriad and growing needs, GAO has identified several considerations for shaping reform proposals: determining societal responsibilities; considering the potential role of social insurance in financing; encouraging personal preparedness; recognizing the benefits, burdens, and costs of informal caregiving; assessing the balance of state and federal responsibilities to ensure adequate and equitable satisfaction of needs; adopting effective and efficient implementation and administration of reforms; and developing financially sustainable public commitments.
The current and prior administrations have expressed concerns that poor labor standards in FTA partner countries may affect workers in the United States and other parts of the world, incentivizing a global “race to the bottom” that unfairly distorts global markets and prevents U.S. businesses and workers from competing on a level playing field. According to DOL, to address such concerns, each FTA signed in the past decade, including those we selected, contains a “labor chapter” that differs in detail across the FTAs but generally includes labor provisions, establishes points of contact for labor matters, and provides a recourse mechanism for matters arising from the labor provisions. The provisions in the labor chapters of the four FTAs that took effect most recently generally reflect the trade policy template created by the May 10th Agreement.

In FTAs that entered into force from January 2004 through January 2009, including CAFTA-DR and the Oman FTA, the labor chapter contains a provision that a party shall not fail to effectively enforce its labor laws, through a sustained or recurring course of action or inaction, in a manner affecting trade between the parties. Under CAFTA-DR and the Oman FTA, matters related to this obligation are the only matters under the respective labor chapters for which parties can seek recourse through dispute settlement that may result in possible fines and sanctions. These FTAs also contain a provision whereby parties commit to “strive to ensure” that the labor rights enumerated in the respective labor chapter are protected by their laws; however, matters arising under this provision do not have recourse through the dispute settlement chapter of the respective FTA.

In FTAs that entered into force after January 2009, including the Colombia and Peru FTAs, the labor chapter includes language echoing the May 10th Agreement that obligates each partner to adopt and maintain in its statutes, regulations, and practices certain fundamental labor rights as stated by the ILO. Although the text of the respective FTAs’ labor chapters varies, this language generally relates to, for example, the rights to freedom of association and collective bargaining and the elimination of compulsory or forced labor. The labor chapters of these FTAs also obligate the parties not to fail to effectively enforce these labor laws in a manner affecting trade between the parties. Pursuant to the labor chapters of these FTAs, if consultations fail, the parties can seek to resolve matters arising under the labor chapters by pursuing recourse through the respective FTAs’ dispute settlement chapters, which may result in possible fines and sanctions.

White Paper. In April 2005, the Working Group of the Vice Ministers Responsible for Trade and Labor in the Countries of Central America and the Dominican Republic issued a report, The Labor Dimension in Central America and the Dominican Republic: Building on Progress: Strengthening Compliance and Enhancing Capacity, commonly known as the White Paper. The White Paper detailed six areas of focus and included recommendations to enhance the implementation and enforcement of labor standards and to strengthen the region’s labor institutions. According to DOL, the U.S. government did not participate in preparing or negotiating the White Paper’s recommendations.
The ILO Verification Project, funded by DOL, was created to monitor implementation of the White Paper commitments and released verification reports every 6 months between 2007 and 2010.

Labor Action Plan. Colombia and the United States agreed in 2011 to the Labor Action Plan, in furtherance of Colombia’s commitment to protect internationally recognized labor rights, prevent violence against labor leaders, and prosecute the perpetrators of such violence. The plan listed nine issue areas to strengthen labor rights that Colombia was required to address before the FTA could receive congressional approval. USTR and DOL are jointly responsible for monitoring Colombia’s ongoing progress in fulfilling these requirements. The nine areas that Colombia agreed to address under the Labor Action Plan were (1) creation of a specialized Ministry of Labor; (2) criminal code reform; (3) prohibiting the misuse of cooperatives; (4) preventing the use of temporary service agencies to circumvent labor rights; (5) criminalizing the use of collective pacts to undermine the right to organize and bargain collectively; (6) collecting and disseminating information on the definition of essential services; (7) seeking the ILO’s assistance in implementing the Labor Action Plan and working with the ILO to strengthen its presence, capacity, and role in Colombia; (8) reforming protection programs; and (9) criminal justice reforms.

USTR. USTR is responsible for developing and coordinating U.S. trade policy and issuing policy guidance related to international trade functions. USTR is responsible to the President and Congress for administering the trade agreements program, including periodic reporting to Congress as required. USTR is also responsible for coordinating the administration’s activities to create a fair, open, and predictable trading environment by identifying, monitoring, enforcing, and resolving the full range of international trade issues. According to USTR, this includes asserting U.S. rights; vigorously monitoring and enforcing bilateral and other agreements; and promoting U.S. interests, including labor interests, under FTAs.

DOL. DOL’s Bureau of International Labor Affairs is responsible for monitoring implementation of FTA labor provisions for all FTAs. The bureau’s Office of Trade and Labor Affairs is designated as the point of contact for implementation of the labor provisions of the FTAs as well as for the labor cooperation mechanisms. Before congressional approval and implementation of an FTA, DOL’s responsibilities include preparing reports for Congress, in consultation with USTR and State, about the partner country’s labor rights protections and child labor laws and the FTA’s potential effect on employment in the United States. After an FTA enters into force, DOL’s responsibilities include receiving, reviewing, and acting on any public complaint submitted about the partner’s compliance with FTA labor obligations (submissions). Both before and after an FTA is implemented, DOL is responsible for assisting the partner country as needed—for example, planning, developing, and pursuing cooperative projects related to labor matters—to strengthen the partner country’s capacity to promote respect for core labor standards. In addition, DOL is responsible for convening the FTA labor affairs councils among partner governments’ labor ministries, which oversee the FTA labor chapters, and for administering the U.S. Labor Advisory Committee for Trade Negotiations and Trade Policy.
The committee consists of representatives of labor organizations and is tasked with providing information and advice with respect to negotiation and implementation of U.S. trade agreements. DOL also administers the National Advisory Committee for Labor Provisions of U.S. Free Trade Agreements, which consists of 12 representatives (4 from the business community, 4 from the labor community, and 4 from the public sector).

State. State is responsible for supporting USTR and DOL in implementing and monitoring FTAs. State’s Bureau of Democracy, Human Rights, and Labor (DRL) coordinates State’s in-country labor officers, who are tasked with carrying out regular monitoring and reporting and day-to-day interaction with foreign governments regarding labor matters. State annually produces the Country Reports on Human Rights Practices, which include information about countries’ labor practices. In addition, State participates with USTR and DOL in the USTR-led interagency team that negotiates FTA labor provisions, contributes critical input to the research and analysis of reports produced by DOL, and provides technical assistance funding to strengthen some countries’ labor capacity. Because USTR and DOL do not maintain a presence in other countries, they often rely on State for outreach, monitoring, and reporting activities related to FTA labor provisions.

USAID. USAID administers technical assistance programs to address labor-related matters. USAID administers trade-capacity-building programs globally in both FTA and non-FTA partner countries. In addition, during FTA negotiations, USAID provides USTR input on draft FTA text as well as input on possible trade-capacity-building programs to address labor-related issues.

Each of the FTA partner countries that we selected for our review has taken steps to strengthen labor rights pursuant to its FTA with the United States, and some of these countries have also implemented other labor initiatives outside the FTA framework. The U.S. government has provided some technical assistance to help FTA partner countries meet their labor commitments. However, stakeholders reported that limitations in partner countries’ capacity to enforce labor laws mean that gaps in labor protections persist.

El Salvador and Guatemala, the CAFTA-DR countries we selected for our review, have both taken steps to implement labor initiatives responding to their FTA commitments and areas of focus identified in the White Paper. According to the ILO, and as reported by DOL, these countries have addressed these areas of focus by implementing changes to improve labor protections, such as increasing the number of labor inspectors and increasing the number of judges and courts that hear labor cases.

El Salvador. According to DOL and ILO verification reports, El Salvador increased its Ministry of Labor’s enforcement budget by about 120 percent from 2005 to 2010, leading to an increase in the number of labor inspectors during the same period as well as increases in both the number of inspections conducted and the number of fines imposed on employers.
Ministry of Labor officials reported that with the increase in size and budget, which resulted from the White Paper recommendations, the ministry is now able to accommodate workers’ requests for workplace inspections, and labor inspectors can issue fines for violations not addressed by employers. In addition, according to officials from the Supreme Court in El Salvador, the labor courts have created a comprehensive statistical system to track labor issues identified in the White Paper, and unions have successfully advocated for legislation that, if passed, would speed labor case reviews and allow plaintiffs to participate more actively.

Guatemala. Guatemala has taken some steps to address labor conditions in accordance with commitments outlined in an Enforcement Plan that the United States and Guatemala agreed to in 2013 as a result of negotiations to resolve a labor case initiated through a labor submission to DOL. The actions that the plan calls for include, among others, increasing the budget for labor law enforcement at the Ministry of Labor and verifying employer compliance with court orders. Officials from the Ministry of Labor reported improvements in labor rights implementation in response to the Enforcement Plan. For example, the number of labor inspections rose from about 5,000 nationwide in 2011 to about 36,800 nationwide in 2013. Also, according to ministry officials, the ministry now conducts labor inspections regularly, rather than in response to complaints, and has increased its legal education requirements for labor inspectors, to further fulfill its Enforcement Plan commitments.

According to USTR, Colombia has taken steps to implement labor protection commitments outlined in the Labor Action Plan, such as reforming the criminal code to establish criminal penalties for employers that undermine the right to organize and bargain collectively, enacting legal provisions and regulations prohibiting the use of temporary service agencies to circumvent labor rights, and reforming the criminal justice system. USTR and DOL have reported that the Colombian government took concrete steps and made meaningful progress under the Labor Action Plan, which fulfilled the condition for advancing the FTA to Congress and resulted in the FTA’s entering into force in May 2012. The government’s steps included securing legislation to establish a separate labor ministry and expanding its labor inspectorate by hiring additional inspectors. Additionally, USTR and the Colombian Ministry of Labor reported that the government enacted a series of laws and ministerial decrees that expanded labor protections as a result of the Labor Action Plan. According to USTR and the Colombian Ministry of Labor, these laws and decrees include, for example, legislation to establish criminal penalties, including imprisonment, for employers that undermine the right to organize and bargain collectively as well as new provisions and regulations to prohibit and sanction with significant fines the misuse of cooperatives and other employment relationships that undermine workers’ rights.

Oman has taken steps to implement labor protections that have allowed for unionization and collective bargaining. Officials from Oman’s Ministry of Manpower—the ministry responsible for labor affairs—reported that Oman’s interest in entering into a free trade agreement with the United States helped lead to the introduction of labor reforms, including the establishment of unions.
According to USTR, in order to meet its commitments made in connection with the FTA, Oman has enacted a number of labor law reforms, including a royal decree in 2006 that established the right to organize labor unions, allowed for collective bargaining, prohibited the dismissal of workers for union activity, guaranteed the right to strike, and guaranteed unions the right to practice their activities freely and without interference from outside parties. According to union and Ministry of Manpower officials we met with in Oman, the General Federation of Trade Unions held its first election in 2010 and served as the starting point for the union movement in Oman. The federation serves as an umbrella organization representing workers from various sectors and, according to union officials, represents about 200 company-level unions and one sector-level union, established in 2013 in the oil and gas sector.

According to USTR, Peru committed to steps to implement labor protection commitments in the context of FTA negotiations with the United States. During our fieldwork, officials at Peru’s Ministry of Labor told us that Peru recently established a new labor inspection regime and that the ministry has focused on improving inspections in the workplace. For example, in 2013, according to Ministry of Labor officials, the ministry took action to centralize authority for labor inspections and help ensure that inspectors are applying the same criteria across the country. Ministry of Labor officials also reported that the ministry took steps to improve labor inspections by modernizing its information systems to allow for digital record keeping, with technical assistance provided by USAID. According to USAID, as a result of these programs, the time required to adjudicate labor cases has decreased from 2 years to 6 months. Additionally, according to a 2007 report by the U.S. House of Representatives Committee on Ways and Means, in order to bring Peruvian labor laws into alignment with the obligations under the FTA, the government of Peru took steps to change Peru’s legal framework governing temporary employment contracts, subcontracting and outsourcing contracts, the right of workers to strike, recourse against anti-union discrimination, and workers’ right to organize. Ministry of Labor officials reported that the ministry has progressively increased the number of labor inspectors, in connection with Peru’s commitment during the FTA negotiations to double its labor inspectorate. As of September 2013, the ministry reported having about 400 labor inspectors on staff nationally.

From fiscal year 2001 through fiscal year 2013, the U.S. government provided a combined total of about $275 million in labor-related assistance for all FTA partner countries. All CAFTA-DR countries, Colombia, and Peru received a combined total of about $222 million in labor-related technical assistance and capacity-building activities since the passage of implementing legislation for these FTAs. In contrast, the U.S. government provided about $53 million in labor-related assistance for all other FTA partner countries during the periods since those FTAs were implemented. Figure 1 shows the labor-related technical assistance that U.S. agencies provided under CAFTA-DR and the Colombia, Oman, and Peru FTAs during the periods beginning, respectively, with the year that Congress passed the FTA’s implementing legislation and ending in 2013. Figure 1 also shows labor-related technical assistance that U.S. agencies provided from 2001 through 2013 under all other FTAs that entered into force in or after 2001.
CAFTA-DR’s six partner countries have received the largest amounts of U.S. assistance for labor-related projects undertaken pursuant to the FTA or independent of the FTA. From 2005, when the CAFTA-DR implementing legislation was passed by Congress, through 2013, the U.S. government provided about $170 million for such projects. According to DOL, this amount included funding appropriated by Congress to fund labor-capacity-building activities as well as funds appropriated to DOL for child labor technical assistance projects to assist these partner countries in addressing labor-related priorities outlined in the White Paper. According to DOL officials, DOL, State, and USAID established an interagency group to develop FTA labor-related projects in consultation with USTR and CAFTA-DR partner governments and to allocate funding among these projects, with funds transferred from State. DOL reported that from 2005 to 2013, these three U.S. agencies administered more than 20 technical assistance projects in support of the White Paper’s priority issue areas.

U.S. technical assistance for labor-related projects in Colombia totaled about $24 million from 2011, when the Colombia FTA implementing legislation was enacted, through 2013. U.S. agencies provided $9 million of that amount for projects to combat child labor and about $13 million to address workers’ rights. DOL has also provided in-kind resources, sending a staff person with labor expertise to support the Colombian government in taking initial steps to implement the Labor Action Plan. The United States is currently funding multiple labor-related projects in Colombia, including State’s award of about $500,000 to the ILO for the promotion of core labor rights and DOL’s award of about $7.8 million for the ILO office in Colombia.

Oman has not received U.S. technical assistance specifically for labor-related projects since the FTA was enacted. According to State, the United States is not involved in any labor-capacity-building or labor-related assistance programs in Oman because of the Omani government’s reluctance to accept foreign assistance. However, officials at Oman’s Ministry of Manpower told us that the United States has provided information and advice on supporting unions and the role of unions in the economy and has expressed support for ongoing labor reforms, including the establishment of unions.

U.S. technical assistance for labor-related projects in Peru totaled about $27.5 million from 2007, when the Peru FTA implementing legislation was enacted, through 2013. Of that amount, $13 million was dedicated to combating child labor, with the remainder dedicated to labor-capacity-building and education projects. USAID officials stated that, for example, the agency expended $3.3 million over a 3-year period to target labor issues. Of this amount, $2.7 million was granted to the Solidarity Center, a labor nongovernmental organization (NGO) affiliated with the American Federation of Labor and Congress of Industrial Organizations (AFL-CIO), to strengthen union negotiation capacity, and $600,000 was granted to an implementing partner—Nathan Associates—for improving information systems, providing training for labor inspectors, and training judges on the implementation of labor laws.

El Salvador. Stakeholders we met with during our fieldwork in El Salvador identified concerns related to the enforcement of labor rights.
Further, State and DOL reports show that workers are often unable to benefit from the legal rights afforded in the labor laws. For example, according to NGO officials, although labor courts have improved their ability to process cases, court decisions are often not enforced. U.S. officials stated that the Ministry of Labor's increases in its budget and number of labor inspectors have not improved the labor inspectorate's effectiveness. According to Ministry of Labor officials, although the ministry's labor inspectors can fine employers for labor law violations, the ministry does not collect the fines, and workers must petition the labor courts to enforce the penalties. Moreover, according to an ILO official, although the government of El Salvador has greatly reduced the amount of time that the courts take to accept a case, resolution of most labor disputes still takes 2 to 4 years. Officials from the Supreme Court in El Salvador told us that about 51 percent of labor court sentences are not enforced, primarily because the plaintiffs do not have the funds required to continue the claims. Union and NGO officials we met with in El Salvador emphasized their concerns over enforcement, stating that because of the length of time the courts take to adjudicate labor cases, workers often take buyouts from the company and drop the cases. State's 2013 Human Rights Report echoes these concerns. For example, the report states that in 2013, the government of El Salvador did not always effectively enforce the laws on freedom of association and the right to collective bargaining, and that legal remedies and penalties are ineffective.

Guatemala. According to stakeholders we met with during our fieldwork in Guatemala, as well as State reports, concerns related to the enforcement and application of labor rights persist. According to USTR, Guatemala has taken steps to address labor reforms outlined in the Enforcement Plan, but additional steps are needed, including passing legislation providing for an expedited process to sanction employers that violate labor laws and implementing a mechanism to ensure payments to workers in cases where enterprises have closed. Additionally, according to USTR, Guatemala will need to demonstrate that the legal reforms it has undertaken and still needs to undertake are being implemented effectively and are leading to positive changes. Union representatives reported concerns related to freedom of association—specifically, that union leaders have been offered monetary compensation to resign from their jobs and to influence other workers against joining the union. Union officials we met with also noted that workers have been terminated for their union affiliation or for not disbanding unions. State's 2013 Human Rights Report echoed these concerns, stating that the government of Guatemala did not effectively enforce legislation on freedom of association, collective bargaining, or antiunion discrimination. Further, according to this report, as a result of inadequate allocation of budget resources and inefficient legal and administrative processes, the relevant government institutions did not effectively investigate, prosecute, and punish employers who violated freedom of association and collective bargaining laws, and the institutions did not reinstate workers illegally dismissed for engaging in union activities.

Stakeholders we met with in Colombia identified concerns related to the enforcement of labor rights and to workers' inability to benefit from rights afforded in the labor laws.
According to USTR, the government of Colombia has made meaningful progress under the Labor Action Plan, but work remains to build on this progress and address remaining and new challenges. According to USTR, the collection of fines for labor violations remains problematic. USTR and DOL have reported that although Colombia's national training and apprenticeship system, Servicio Nacional de Aprendizaje (SENA), is responsible for collecting fines for labor violations, until recently SENA was barred from collecting fines from companies that filed a judicial appeal. According to a joint USTR-DOL statement, as of April 2014, SENA was authorized to hold monetary payments from businesses as collateral, pending the outcome of the judicial appeals of their fines, but had not yet begun to exercise this authority. Additionally, although the Ministry of Labor increased the number of inspectors, labor unions and NGOs reported that this action has not resulted in more effective inspections or improved working conditions. USTR has reported that new forms of abusive contracting continue to pose problems for the protection of labor rights in Colombia. For example, according to USTR, although the number of illegal cooperatives has dropped, many employers have shifted to various forms of subcontracting, including entities known as simplified stock companies, to avoid direct employment relationships. ILO, NGO, and labor union officials we met with described this form of subcontracting as a legal loophole that is used to undermine workers' rights. According to union officials, a law passed by the Colombian government in 2013 prohibiting the misuse of cooperatives was also intended to increase formalized employment and encourage companies to hire workers directly instead of as temporary labor. State's 2013 Human Rights Report noted that the Colombian government generally enforced applicable labor laws and took steps to increase the enforcement of freedom-of-association laws. However, the report identified weaknesses in labor protections in Colombia, echoing concerns expressed to us by labor union and NGO representatives, related to labor inspections, collecting fines for labor violations, and employers' use of outsourcing contracts.

Ministry of Manpower and union officials we met with in Oman reported that collective bargaining and freedom of association are allowed by law and largely respected. However, State has raised concerns about the enforcement of labor law among Oman's foreign worker population. State's 2013 Human Rights Report notes that Oman's Ministry of Manpower effectively enforces the labor law as it applies to Omani citizens but has not effectively enforced regulations related to working conditions and hours for foreign workers.

Despite steps that the government of Peru has taken to address labor conditions, union and NGO officials we met with reported that enforcement of labor laws remains weak and labor conditions have not improved in certain respects. According to State, Peru's labor laws place a 5-year limit on the continuous renewal of short-term labor contracts not leading to permanent employment in most sectors of the economy. However, State's 2013 Human Rights Report notes that a sector-specific law covering nontraditional export sectors such as apparel exempts employers from this 5-year limit and allows them to hire workers through an indefinite series of short-term contracts.
Union officials we met with during our fieldwork also reported poor labor conditions in Peru's nontraditional export sectors, which these officials described as not affording the same labor rights as other sectors in the economy. More generally, according to NGO officials, Peru's large informal sector makes it difficult for the government to enforce labor rights, because informal companies, which are not registered with the government and therefore are not subject to labor inspections, typically do not follow labor laws. State's 2013 Human Rights Report also identifies continuing labor concerns in Peru's nontraditional export sectors, such as the effect of the use of temporary service contracts and subcontracting on workers' freedom of association. State's report also notes that in 2013, penalties for violations of freedom of association and collective bargaining were rarely enforced, the judicial process was prolonged, and employers were seldom penalized for dismissing workers involved in trade union activities. In addition, union officials whom we met with stated that Peru's agricultural law allows for workers to be paid less than the legal minimum wage and to be continuously hired on temporary contracts for 2- to 3-month periods. These officials stated that this limits workers' ability to collectively bargain and exercise freedom of association, because of fear that if they join a union, their contracts will not be renewed.

DOL has accepted five of six labor submissions—that is, formal complaints alleging that FTA labor provisions had been violated—that it has received since 2008 and has closed one submission as resolved. However, DOL did not meet its original deadlines for reviewing and reporting on any of the submissions, exceeding the established 6-month submission review time frame by an average of about nine months and possibly delaying resolution of the submissions. Stakeholders whom we interviewed in the five selected partner countries generally expressed a lack of awareness or understanding of DOL's submission process, which may have limited the number of submissions filed. Moreover, stakeholders we interviewed expressed concerns about delays in resolving labor concerns detailed in the submissions for Guatemala and Honduras.

According to DOL, FTA labor provisions establish official processes for receiving submissions from interested organizations that believe a trading partner is not fulfilling its labor commitments. In the United States, DOL generally receives and reviews submissions made under the labor chapters of each trade agreement. DOL issued procedural guidelines pertaining to this function via publication of a Federal Register notice in 2006. The guidelines contain deadlines and substantive criteria for acceptance and investigation of submissions. For example, DOL shall determine whether to accept a submission within 60 days and is to consider, among other things, whether it contains statements that, if substantiated, would constitute a failure by the other party to comply with its commitments under an FTA. If DOL determines that the circumstances require, the 60-day time frame can be extended. According to DOL, its decision to review a public submission does not indicate any determination as to the validity or accuracy of the allegations contained in the submission; the submission's merit is addressed in the public report that follows DOL's review and analysis. DOL officials noted that although DOL has responsibility for investigating submissions, USTR, DOL, and State work together to engage diplomatically to address concerns.
Figure 2 illustrates DOL's submission process, including the established time frames for accepting and reviewing submissions. Since 2008, DOL has accepted labor submissions filed under the Bahrain, Dominican Republic, Guatemala, Honduras, and Peru FTAs and has closed the Peru FTA submission. The submissions for Bahrain, the Dominican Republic, and Guatemala—accepted in 2011, 2012, and 2008, respectively—remain open while the U.S. government engages with those governments to address the concerns that the submissions raised. The submission for Honduras, accepted in 2012, remains open while DOL reviews its allegations. Figure 3 presents information about the five submissions (see app. III for further details).

Although DOL accepted most of the submissions it received within the 60-day time frame established by its guidelines, it did not complete its reviews of the submissions within the established 180-day time frame. A DOL official we met with indicated that DOL cannot complete the types of comprehensive investigations and reports it has been providing within the 180-day time frame. For each of the submissions, DOL decided at the end of the original 180-day review time frame to extend the review period and reported its findings and recommendations an average of 262 days after the original time frame had ended. According to USTR and DOL, before DOL publishes its review of a submission, both agencies engage informally with the relevant partner country to explore ways to address the concerns raised in the submission. However, USTR and DOL do not request formal consultations with a partner country to address DOL's recommendations until DOL has issued its report. As a result, extensions of DOL's review time frame may delay resolution of the submission.

Bahrain. DOL received the Bahrain submission on April 21, 2011, and accepted it on June 10, 2011, or 50 days later. In December 2011, DOL extended the submission review period to consider and review additional information received from the government of Bahrain and Bahraini workers, amendments made to the Bahraini Trade Union Law, and labor-related developments in international forums. DOL issued its report on December 20, 2012, 559 days after accepting the submission.

Dominican Republic. DOL received the Dominican Republic submission on December 22, 2011, and accepted it on February 22, 2012, or 62 days later. In August 2012, DOL extended the review time frame to consider public comments about the submission as well as information gathered by a Bureau of International Labor Affairs delegation during a visit to the Dominican Republic. DOL issued its report on September 27, 2013, 583 days after accepting the submission.

Guatemala. DOL received the Guatemala submission on April 23, 2008, and accepted it on June 12, 2008, or 50 days later. DOL issued its report on January 16, 2009, 218 days after accepting the Guatemala submission.

Honduras. DOL received the submission on March 26, 2012, and accepted it on May 14, 2012, or 49 days later. On November 2, 2012, 5 days before DOL's 180-day reporting deadline, DOL extended its review because of the scope of the submission, the scope of the alleged labor law violations, and the large amounts of information received from the Honduran government and stakeholders. As of September 2014, DOL officials were continuing to review documentation of the allegations and prepare their report.
DOL officials were unable to estimate when they would issue a public report.

Peru. DOL received the submission for Peru on December 29, 2010, and accepted it on July 19, 2011, or 202 days later. On January 20, 2012, 185 days after accepting the submission, DOL concluded that circumstances required an extension of time for a thorough and detailed review of the Peru submission. DOL issued its report for the Peru submission on August 30, 2012, 408 days after accepting it.

Although DOL has periodically reviewed and updated the submission process since establishing it in 1994, DOL officials told us that they have not reviewed or adjusted the submission review time frame to reflect the time it takes DOL to issue its reports after accepting the submissions. DOL's extensions of each submission review period since 2008 have shown this time frame to be unrealistic. Federal standards for internal control call on agency management to monitor and assess the effectiveness and efficiency of their operations over time and to promptly resolve any deficiencies.

Our interviews with union and other nongovernmental stakeholders in the selected partner countries suggested that little or no awareness and understanding of the FTA labor submission process may have affected the number of submissions filed. Union representatives we interviewed in our five case study countries—Colombia, El Salvador, Guatemala, Oman, and Peru—were generally either unaware of DOL's labor submission process or considered it difficult to understand and use. For example, a union representative in Colombia stated that the union would have filed a submission if the representative had known about the process. In another example, representatives of one of the larger El Salvador unions and of GMIES—an NGO that monitors the actions of the Salvadoran government—stated that it was difficult for a "typical worker" to file a submission with DOL and that the information required for the submission is generally difficult, if not impossible, to obtain and document. A small number of union representatives who were aware of the process attributed their knowledge to information received from the AFL-CIO or their country's Solidarity Center. For instance, in Guatemala, a representative of a union that had signed the current Guatemala submission stated that without the help of the Solidarity Center, the union would have been unable to locate submission instructions and file the submission. Moreover, although the government officials we interviewed in the five countries knew of the labor submission process, only in El Salvador did these officials express an understanding of how DOL evaluates a submission and conducts its fact-finding investigations. For example, the Guatemalan Ministry of Labor officials we interviewed could not provide information about how the process works or describe the criteria DOL uses to evaluate a submission's merits. The Ministry of Manpower officials whom we interviewed in Oman did not understand the purpose of the submission process or how submissions could be filed.

In addition, U.S. agencies have made minimal efforts to publicize the process for nongovernment stakeholders, who are most likely to file submissions. Federal standards for internal control pertaining to information and communication call for agencies to have relevant, reliable, and timely communications relating to internal as well as external events, to ensure control of their operations.
According to DOL officials, DOL relies exclusively on its website and its 2006 Federal Register notice to inform the public about the process. Moreover, U.S. officials whom we interviewed in the countries we visited indicated that they do not advertise the existence of the submission process. Without additional efforts to inform nongovernment stakeholders in FTA partner countries about the DOL labor submission process, U.S. agencies are limited in their ability to use the submission process as a means of holding FTA partners accountable for fulfilling their labor commitments.

Several stakeholders we spoke with expressed concerns about delays in U.S. agencies' efforts to resolve the matters raised in the Guatemala and Honduras submissions. Because of the complexity of these matters, resolving them has proven difficult and time-consuming, according to USTR. Moreover, according to USTR, neither the FTAs in force nor U.S. implementing guidance outlines a process or time frames for U.S. agencies' efforts to engage diplomatically with FTA partners to resolve labor matters related to DOL submission reports. According to USTR and DOL, they engage with FTA partners through informal and formal communication to resolve any FTA concerns. For example, according to a USTR official, USTR has informally engaged with FTA partners—through labor affairs council discussions, telephone conversations, and e-mail exchanges—for an average of 6 months regarding concerns raised in a submission before requesting formal consultations with the partner government. If an FTA partner does not address USTR's and DOL's labor concerns or is unwilling to informally engage, USTR and DOL may request formal consultations under the FTA's labor chapter. USTR officials stated that an FTA partner's willingness to engage determines in part how quickly potential labor violations are addressed. According to USTR officials, some FTA partners are willing to engage informally to resolve labor violations, while other partners engage only after USTR and DOL have jointly requested formal consultations under the FTA's labor chapter.

Examples of stakeholders' concerns include the following.

AFL-CIO representatives in Washington, D.C., who were involved with the Honduras and Guatemala submissions expressed appreciation for DOL's care in investigating cases and were eager to provide requisite evidence. However, they expressed disappointment that both cases have taken longer than they anticipated, stating that justice delayed can mean justice denied when workers' livelihoods are at stake.

A union representative in Guatemala expressed disappointment that 6 years after the submission was filed, it had not been resolved. He also said that the steps outlined in the Enforcement Plan were mainly administrative and did not address all complaints detailed in the submission. As a result, according to the union representative, the conditions of workers identified in the submission have not improved.

Four government officials from CAFTA-DR partner countries whom we interviewed in Washington, D.C., described DOL's submission process as lacking fairness and transparency. According to these representatives, DOL does not give partner governments clear information about next steps or access to evidence supporting the submissions and deprives the countries of the opportunity to respond to allegations presented by any member of the public.
The CAFTA-DR representatives added that in their opinion, DOL conducts the submission process in an adversarial manner and that the process therefore does not function as a mechanism for addressing concerns on a cooperative basis. The representatives noted that this, in turn, can delay or complicate resolution of problems.

Since 2009, USTR and DOL, with State's assistance, have taken steps intended to strengthen monitoring and enforcement of FTA partners' compliance with FTA labor provisions, but their monitoring and enforcement remain limited. After USTR announced in July 2009 that the agencies would take a more proactive, interagency approach to monitoring and enforcing FTA labor provisions, USTR and DOL developed mechanisms to track labor conditions and practices in priority trade partner countries. They also took some proactive monitoring steps with several FTA partners. However, although they jointly address labor submissions and work together to engage with partner countries regarding labor concerns, USTR and DOL have not established a coordinated strategic approach to systematically assess and address other possible inconsistencies with the FTA labor provisions, such as concerns that DOL identifies in internal management reports. The lack of such an approach may be inconsistent with USTR's 2009 announcement as well as with best practices for interagency collaboration. Agency officials cited limited funding and staffing as constraints on their ability to monitor and enforce FTA labor provisions. USTR's, DOL's, and State's annual reports to Congress provide information about labor conditions in partner countries. However, reflecting in part USTR's and DOL's limited monitoring and enforcement of FTA labor provisions, the reports generally do not detail concerns about the implementation of FTA labor provisions by partner countries that have not been the subject of labor submissions.

In 2009, USTR made a public statement pledging to address weaknesses in monitoring and enforcement of FTA labor provisions such as those we identified in our July 2009 report. In its statement, USTR—which has principal responsibility for monitoring and enforcing statutory trade agreements—publicly announced its intention to adopt a proactive, interagency approach to monitoring and enforcing FTA labor provisions in cooperation with DOL and State. According to USTR's announcement, for example, the agencies would no longer enforce labor obligations only in response to complaints, would hold trading partners to their obligations on labor standards, and would work in close partnership to immediately identify and investigate labor violations. In addition, USTR's July 2009 announcement stated that the agencies would deploy resources more effectively to identify and solve problems at the source and would jointly engage with governments of countries that violate the rules, to quickly restore workers' rights, assist partner countries to find a way to fix identified labor problems, and pursue legal remedies when other options are closed.

A broad range of activities underpins federal monitoring and enforcement efforts.
For the purposes of this report, "monitoring" refers to federal activities that are undertaken to identify instances where foreign laws, regulations, and practices may be inconsistent with trade agreement provisions; "enforcement" refers to actions taken by USTR to secure foreign compliance with trade agreements, which can include initiating dispute settlement procedures that certain trade agreements provide. When agencies identify possible inconsistencies with FTA provisions, agencies take a variety of actions to encourage and obtain foreign compliance with trade agreements.

As we previously reported, according to records and staff at USTR and other agencies, monitoring and enforcement of trade agreements typically involves several key steps: identifying compliance problems, setting priorities, gathering and analyzing information, developing and implementing responses, taking actions to enforce agreements, and coordinating with other agencies (to identify these key steps, we reviewed agency records; see GAO/NSIAD-00-76). Systematic implementation of these key steps is necessary to ensure that the agencies effectively and efficiently accomplish their objective and help ensure that management's directives are carried out.

In our July 2009 report (GAO-09-439), we found that U.S. agencies had not proactively monitored FTA partners' compliance with labor commitments and did not consider that they were required to do so. Moreover, U.S. interaction with partners regarding labor issues after FTAs entered into force had been limited, usually in part because of the low priority attached to this function. Further, we found that U.S. agencies generally gave attention to problematic labor situations in certain FTA partners' export sectors only after media exposure of the situations.

Since USTR's July 2009 announcement, the agencies have engaged in discussions with partner countries, gathering information about labor conditions in the countries and responding to FTA labor concerns. In doing so, the agencies have addressed some of the typical key elements of monitoring and enforcement that we previously identified, such as gathering and analyzing information and setting priorities. However, the agencies' approach for countries other than Jordan, Panama, and, to some extent, Colombia and Peru generally does not incorporate other key elements, such as identifying compliance problems, developing and implementing responses, and taking enforcement actions.

In October 2012, USTR established the Trade Policy Staff Committee (TPSC) Subcommittee on Labor Monitoring and Enforcement to focus on monitoring and enforcing labor provisions in partner countries, with members from various agencies, including DOL and State. USTR charged the subcommittee with monitoring and enforcing labor issues in 20 FTA partner countries as well as 146 countries that participate in U.S. trade preference programs. According to a USTR document, the subcommittee participates in efforts to enforce labor obligations, such as through submission reviews, consultation, and dispute settlement under FTAs. However, according to DOL and State staff who have participated in the Subcommittee on Labor Monitoring and Enforcement, the subcommittee's meetings generally serve as an information-sharing mechanism rather than a monitoring and enforcement mechanism. State participants described the subcommittee as an interagency process for reviewing and discussing labor conditions and assessing risks in trade partner countries, including FTA and trade preference program partners.
State added that although this process does not entail a regular, detailed review of each FTA partner country's compliance with labor obligations, it facilitates discussion of concerns and development of next steps to address these concerns based on input from each agency. A USTR official stated that when the subcommittee met in October 2012 and February 2013, it decided that an emphasis on overseeing labor conditions in parts of Africa and Haiti was needed and that monitoring of FTA labor provisions' implementation would be based mainly on addressing DOL labor submissions. USTR officials noted that information sharing is a key part of its process to assess labor conditions in FTA partner countries relative to their commitments and to identify matters that are appropriate for further action. According to a USTR official, work by the subcommittee and ad hoc interagency country teams has led to increased actions and engagement in countries such as Jordan, Panama, Peru, and Morocco, where, according to USTR, it has conducted, or expects to conduct, high-level labor meetings and monitoring trips in 2014. Further, according to USTR officials, USTR has used input from the subcommittee to develop a matrix to more comprehensively track monitoring activities and technical capacity-building assistance across FTA partner countries. USTR officials indicated that this tool has served as a point of departure for soliciting the subcommittee's input on priorities and coordination of future activities.

In addition, according to USTR officials, USTR regularly coordinates and communicates with other agencies, industry, labor unions, the ILO, and other external stakeholders to identify possible inconsistencies with the labor provisions of trade agreements. USTR officials said that the agencies routinely address labor issues identified in the subcommittee or by other stakeholders through bilateral consultations and formally established FTA mechanisms, such as the labor affairs councils established by most FTAs and the FTA free trade commissions, which are the main forums for bilateral dialogue about each FTA's implementation. USTR has publicly reported on such meetings, as we recommended in 2009. However, the labor affairs councils for most FTAs have met only once, and in two cases they have not met at all, since the FTAs entered into force. Moreover, the FTA free trade commissions' discussions reportedly do not address labor issues in depth. For example, officials from the Ministry of Labor in Peru indicated that in the last free trade commission meeting they attended, labor issues were not substantively addressed; instead, members of the commission agreed that a meeting on the topic could be held later in the year.

USTR officials stated that monitoring and enforcement covers a large spectrum of activities and that in some cases USTR has taken steps to resolve issues in countries where submissions have not been filed. For example, according to USTR officials, USTR and DOL negotiated an implementation plan in 2012 to address concerns regarding foreign workers in Jordan's garment sector and have continued to monitor Jordan's implementation of its FTA labor commitments.
USTR officials also cited as examples of proactive monitoring and enforcement USTR's engagement with Colombia regarding the Action Plan, cooperation with Panama in passing administrative and legal changes to address labor concerns as part of the FTA ratification process in 2011, and discussions with Peru regarding commitments that the government made in 2007 to improve respect of labor rights for temporary and subcontracted workers.

Our analysis shows that in Jordan and Panama, USTR's and DOL's activities have addressed the typical key elements of monitoring and enforcement. Regarding Jordan, USTR documents indicate that USTR and DOL have addressed all key elements of monitoring and enforcement, such as by putting in place a concrete plan to fix an identified problem and taking steps to assure that the plan is implemented. Regarding Panama, USTR documents show that the agencies took steps to assure that Panama met its commitments both before and after the FTA entered into effect in 2012 and have been pursuing steps, such as holding several recent meetings, that may help in resolving outstanding concerns about implementation of FTA labor provisions. However, while USTR's and DOL's reported monitoring activities in Colombia and Peru can help position those countries to meet their FTA labor commitments and can better inform USTR of labor concerns in these countries, the evidence that we examined does not demonstrate systematic implementation of the typical key elements for monitoring and enforcement of labor provisions.

Colombia. Documentation that USTR and DOL provided, as well as evidence that we gathered in Colombia and in interviews with agency officials, indicates that the agencies took several of the key steps of monitoring and enforcement. For example, a report that USTR and DOL jointly prepared in 2014 showed that the agencies have been gathering and analyzing information, setting priorities, assessing implementation, and identifying compliance problems. However, we did not see evidence that a current plan is in place to address the outstanding concerns that USTR and DOL have identified.

Peru. Documentation that USTR and DOL provided demonstrates a systematic approach to some, but not all, of the monitoring and enforcement steps that we identified. Specifically, both agencies have engaged to some extent with Peru regarding labor matters in the 5 years since the Peru FTA went into effect. The agencies' documentation shows that at least some of their efforts to gather and analyze information regarding Peru relate to verifying the government's implementation and enforcement of previous reforms, such as reforms of its labor inspection regime and of its legal framework for temporary employment and subcontracting. USTR and DOL documents also show that the agencies have identified several possible compliance concerns and have engaged with the government by scheduling meetings and asking questions. However, we did not see evidence of a plan to resolve outstanding U.S. concerns. Further, in July 2014, Peru announced that, to improve the business climate and attract investment, it had enacted legal changes that rolled back previously implemented improvements in its labor laws in areas such as health and safety protections for workers.

To strengthen its monitoring and enforcement of FTA labor provisions, in 2012, DOL established the Monitoring and Enforcement of Trade Agreements Division within the Bureau of International Labor Affairs' Office of Trade and Labor Affairs.
The division's objectives are to ensure that partner governments (1) effectively enforce their labor laws and implement policies that protect worker rights; (2) understand their commitments under the FTA labor chapters; and (3) revise or adopt laws, regulations, and policies consistent with international labor standards. The division mainly monitors implementation of FTA labor provisions through its review of FTA labor submissions and through the use of internal documents called management reports, according to DOL officials. However, DOL uses management reports to identify labor concerns rather than to fully assess consistency with FTA labor provisions.

FTA labor submissions. DOL's reviews of the five FTA labor submissions—for Bahrain, the Dominican Republic, Guatemala, Honduras, and Peru—that it has accepted since 2008 have led to recommendations, in three of its submission reports, that the partners address alleged violations of FTA labor provisions. According to USTR officials, labor submissions are a central component of the FTAs' framework for monitoring and enforcement of labor obligations, and USTR and DOL invest extensive time and resources in addressing submissions. For the Bahrain and Guatemala submissions, DOL, with USTR, formally requested labor consultations under the FTAs' labor chapters to address the concerns raised in the submissions. In Guatemala's case, after consultations failed to address the concerns, the U.S. government invoked dispute settlement proceedings; during these proceedings, the two governments negotiated the Enforcement Plan, outlining steps that Guatemala agreed to take. (See app. III for a discussion of DOL's monitoring of Guatemala's implementation of the Enforcement Plan.)

Internal management reports. In 2012, according to DOL staff, DOL began consistently monitoring FTA-related labor issues in 13 FTA partner countries. The management reports provide, among other information, a synopsis of labor conditions in partner countries based on sources such as State, the ILO, the International Trade Union Confederation, local stakeholders, and the press. The reports include updated contact information for each partner and identify labor conditions or practices—for example, related to freedom of association and collective bargaining—that may be inconsistent with the FTA labor provisions. They also outline steps that DOL staff propose to take to address any identified concerns, subject to approval and resource availability. According to DOL officials, DOL uses the management reports to identify labor concerns, rather than potential FTA violations for enforcement purposes, and to facilitate engagement on technical assistance projects with FTA partner countries. DOL officials noted that claiming and proving a violation of FTA labor provisions would be very costly and legally complicated. The officials explained that when a management report identifies labor concerns, DOL may request formal or informal consultations with the partner country's ministry of labor to discuss these concerns and will attempt to cooperatively address them. The officials also noted that a persistent condition may result in a submission from a stakeholder.

USTR and DOL work together on an ad hoc basis to address labor concerns identified in submissions and to engage with partner countries regarding labor matters.
However, the agencies have not developed a coordinated strategic approach to systematically assess and address other possible inconsistencies with FTA labor provisions, such as the issues that DOL's management reports identify, in other partner countries. This lack of a joint approach may be inconsistent with USTR's 2009 statement that the agencies would work in close partnership to immediately identify and investigate labor violations. Further, while the agencies take steps such as gathering facts from credible and reliable sources and prioritizing their monitoring of the partner countries, they have not jointly operationalized other key steps that we previously identified as typical for monitoring and enforcement of trade agreements. For example, although DOL's management reports are its primary means, other than submissions, of monitoring and identifying issues that may be inconsistent with FTA labor provisions, the agencies have not established a coordinated strategic approach to identify and carry out steps necessary to address issues identified in the reports. According to DOL, the management reports are available to USTR and State, and the three agencies routinely discuss the reports' contents. USTR confirmed that DOL shares information from the reports with the Subcommittee on Labor Monitoring and Enforcement for discussion and consideration. However, USTR officials noted that USTR regards the management reports as one of multiple information sources that it considers before deciding to engage with a country about a labor concern, rather than as an indicator of a need to engage with the country. USTR officials also noted that USTR views the management reports as a new internal DOL tool and is assessing how best to use these reports in the interagency process.

Moreover, USTR, DOL, and State have differing perspectives on how to monitor and enforce FTA labor provisions, according to agency officials. According to a USTR official, each agency approaches monitoring and enforcement in relation to its mission, and as a result, some of the 13 countries that DOL has identified internally as priorities differ from countries that USTR has identified as priorities in the context of the FTAs. The USTR official stated that USTR must assess whether a labor issue constitutes a breach of obligations set forth in the relevant FTA before it pursues dispute settlement, whereas DOL and State—as observers of labor conditions and human rights, respectively—approach labor issues more strictly as labor rights concerns. According to a State official, USTR prefers to address identified partner countries' labor issues without the intent to invoke the FTA.

Our prior work identifying best practices for interagency collaboration has shown that agencies can enhance and sustain their collaborative efforts by engaging in eight practices. For example, to achieve a common outcome, collaborating agencies need to, among other things, not only define and articulate the outcome but also establish strategies that work in concert with their partners' or are joint in nature. Such strategies help in aligning the partner agencies' activities, core processes, and resources to accomplish the common outcome.

Agency officials cited limited staffing and resources as constraints on their ability to proactively monitor and enforce implementation of FTA labor provisions as the number of FTAs increases, leading the agencies to focus most of their efforts on a few priority countries.
The agencies described enforcement activities as particularly resource intensive. DOL officials told us that because they had to focus much of their available resources on enforcing the Colombia Labor Action Plan and the Guatemala Enforcement Plan, their ability to monitor and enforce FTA partners' compliance with their FTAs' labor provisions in the last year was limited. Given the staffing and resource constraints that USTR, DOL, and State officials cited, effective interagency collaboration—including joint strategies that assist in aligning partner agencies' activities, core processes, and resources to more effectively accomplish the common outcome—is essential to maximize the agencies' ability to monitor and enforce compliance with these provisions.

USTR. Staffing and funding constraints have, at times, limited the office's engagement with FTA partner countries regarding labor matters, according to USTR officials. Notably, according to the officials, recent sequestration-related cuts at USTR sharply limited travel. The officials told us that USTR's Office of Labor Affairs has four staff members—an Assistant U.S. Trade Representative for Labor, two Deputy Assistant U.S. Trade Representatives for Labor, and the Director for Labor Affairs—whose responsibilities include, among others, negotiating labor provisions in new agreements such as the Trans-Pacific Partnership, overseeing labor matters for 20 FTA partner countries and 120 trade preference countries, and engaging with countries to address labor complaints. According to USTR officials, although the Office of Labor Affairs staff has doubled from two to four since 2008, the number of trade partner countries for which they are responsible has increased from 14 to 20. USTR staff stated that because so few staff are available, they cannot engage with partner countries regarding every labor issue identified and must address such issues in cooperation with other agencies. USTR staff also noted that they depend on DOL and State for day-to-day monitoring of labor conditions in partner countries.

DOL. Resource constraints limit DOL's ability to monitor implementation of FTA labor provisions except in partner countries that DOL has identified as priorities, such as countries cited in labor submissions, according to DOL officials. DOL reported that in fiscal years 2013 and 2014, the Monitoring and Enforcement of Trade Agreements Division had five to eight full-time staff, with primary responsibilities that include monitoring labor conditions in 20 FTA partners and addressing and following up on labor submissions—for example, assessing implementation of submission report recommendations and engaging in consultations with the FTA partner. DOL officials stated that over the past year, the division's staff spent 80 percent of their work hours monitoring implementation of the Guatemala Enforcement Plan; following up on activities initiated under the Colombia Labor Action Plan; and addressing labor submissions for Honduras, the Dominican Republic, Bahrain, and Mexico. The employees' remaining work hours were available to monitor and engage with the other 14 FTA partner countries. DOL officials expressed concern that challenges related to resource limitations will grow as the number of FTAs increases. For example, according to DOL and State officials, Vietnam, which has a poor record of protecting labor rights, is among the countries participating in the Trans-Pacific Partnership negotiations.

State.
State has limited resources available to support USTR's and DOL's monitoring of FTA labor provisions, according to State officials. State's Bureau of Democracy, Human Rights, and Labor (DRL) coordinates State's in-country labor officers or labor reporting officers, who carry out regular monitoring and reporting and day-to-day interaction with foreign governments on labor matters. However, these staff are not responsible for monitoring implementation of labor provisions in the FTAs. State informs USTR and DOL about labor concerns identified in FTA partner countries, through reporting cables and other means, and supports them in investigating labor submissions and addressing related recommendations. DRL has seven staff at State's headquarters in Washington, D.C., one of whom focuses on trade-related issues; these staff are supported by labor affairs officers and labor reporting officers at the embassies in each of the 20 FTA partner countries. State officials explained that each of these labor affairs officers and labor reporting officers has other responsibilities. Overall, the amount of time that these officers dedicated to labor issues varied from 5 percent (in Australia) to 75 percent (in Mexico).

USTR, DOL, and State provide required annual reports to Congress that contain some information about labor conditions in FTA partner countries. However, the annual reports generally do not detail concerns about the implementation of FTA labor provisions by partner countries that have not been the subject of labor submissions, in part reflecting the agencies' limited monitoring and enforcement of the provisions. USTR has a statutory responsibility to report to Congress about trade agreement programs on an annual basis. According to DOL and State, they do not have such a responsibility, although some of their required reports include related information.

USTR. Each year, USTR provides Congress with the Trade Policy Agenda for the current year, as well as the Annual Report of the President of the United States on the Trade Agreement Programs. The agenda and the annual report include information about trade policy priorities and actions taken to implement FTAs. For example, the 2014 agenda indicates, among other things, that USTR will seek to ensure that trade partners meet their obligations related to labor rights, will focus on implementation of FTAs, and will work with key partner countries to address specific labor issues. The 2013 annual report describes, among other things, the status of the labor submissions regarding Bahrain, Guatemala, the Dominican Republic, and Honduras and USTR's efforts to address the concerns that the submissions identify; in some cases, possible inconsistencies with the FTA may be discussed. However, the annual report generally does not detail concerns about the implementation of FTA labor provisions by partner countries that have not been the subject of labor submissions, reflecting in part USTR's and DOL's limited monitoring and enforcement of these provisions. Including appropriate information resulting from more extensive monitoring and enforcement could help inform Congress and other U.S. stakeholders about the extent to which trade partners are fulfilling their FTA labor commitments.

DOL. DOL is required to report to Congress every 2 years regarding labor issues related to CAFTA-DR but is not required to report on the implementation of labor provisions in other FTAs.
For CAFTA-DR, DOL is required to submit a biennial report to Congress on the progress made by the CAFTA-DR partner countries in implementing the labor chapter provisions and labor cooperation and capacity-building activities. DOL's 2011 report on CAFTA-DR summarizes the progress made by each CAFTA-DR partner country in implementing these provisions and activities, although the report generally does not detail concerns about the implementation of CAFTA-DR labor provisions.

State. Every year, State provides Congress with the annual Country Reports on Human Rights Practices, which covers all countries receiving assistance and all United Nations member states, including all U.S. FTA partners. Each of the country reports includes a section on labor issues, covering topics such as internationally recognized individual, civil, political, and worker rights, as set forth in the Universal Declaration of Human Rights and other international agreements. In addition, most of the country reports for FTA partner countries that we reviewed include information about unfavorable conditions faced by workers and any challenges to the partner countries' implementation of their labor laws. The reports generally do not—and, according to State, are not intended to—detail concerns about the implementation of FTA labor provisions.

The United States' recent FTAs have served as means of securing commitments from trade partners to uphold and protect internationally recognized labor rights. Although the FTA partners we selected for our review have made some progress, with U.S. assistance, in implementing their FTA labor commitments, enforcement weaknesses and problematic labor conditions persist. In addition, nongovernment stakeholders we interviewed in partner countries had little or no awareness of the labor submission process that DOL established to allow such stakeholders to register concerns about FTA partners' labor practices. Further, DOL's extensions of its 6-month submission review time frame, by an average of nine months per submission, have shown the time frame to be unrealistic. Moreover, U.S. agencies' work with the partners to resolve these concerns has in some cases been very time-consuming. For example, 6 years after DOL received the Guatemala labor submission, the submission remains open, and according to U.S. agencies, Guatemala has not fully addressed the weaknesses in its labor law enforcement or the resulting hardships on workers.

Further, although USTR and DOL jointly pledged in 2009 to adopt a more proactive, interagency approach to monitoring and enforcing FTA labor provisions, in practice the agencies systematically investigate possible inconsistencies with these provisions primarily in response to labor submissions. In addition, despite ongoing interaction between USTR and DOL—for example, in addressing submissions—they have not developed a strategic approach to jointly set priorities and coordinate efforts to respond to labor concerns such as those identified in the DOL management reports. Without such strategic coordination, and given constraints on resources, both agencies have focused their monitoring and enforcement activities, apart from addressing labor submissions, on a few priority countries. As a result, consistency with FTA labor provisions in most partner countries is generally not monitored and enforced systematically. Moreover, USTR may be limited in its ability to report to Congress regarding concerns about FTA partners' implementation of their respective FTA labor commitments.
To improve the capacity of the U.S. government to monitor and enforce FTA partners' compliance with mutually agreed FTA labor provisions, we are making four recommendations to the U.S. Trade Representative and the Secretary of Labor.

We recommend that DOL reevaluate and adjust, if necessary, its FTA labor submission review time frame to ensure that it more accurately reflects the time required to thoroughly investigate and to report on most labor submissions.

We recommend that DOL take steps to better inform stakeholders in FTA partner countries about its FTA labor submission process.

We recommend that USTR and DOL, in cooperation with State, establish a coordinated strategic approach to monitoring and enforcing FTA labor provisions, to ensure that they systematically assess the consistency of priority FTA partner countries' laws, regulations, and practices with trade agreement labor provisions and address any identified concerns.

We recommend that USTR ensure that the Annual Report of the President of the United States on the Trade Agreement Programs, which USTR provides each year to Congress, includes results of USTR's and DOL's efforts to proactively monitor partner countries' compliance with FTA labor provisions.

We provided a draft of our report to USTR, DOL, State, and USAID. USTR and DOL provided written comments, which are reproduced in appendixes V and VI, respectively. USTR, DOL, and State also provided extensive technical comments, which we incorporated or addressed as appropriate. USAID did not provide comments.

In their written comments, USTR and DOL expressed general agreement with our recommendations. USTR wrote that it embraced the recommendation to improve coordination with the Departments of Labor and State, to identify and address areas of concern, and to ensure that its reporting to Congress effectively reflects the results of these efforts. DOL committed to reevaluate its internal submission review process, in consultation with USTR and State, to determine whether internal adjustments may be necessary. DOL also said that it will evaluate additional available actions to expand its ability to inform stakeholders in FTA partner countries about the FTA labor submission process. Finally, DOL said that it will evaluate additional options to increase its proactive monitoring and enforcement of labor provisions in FTAs and its coordination with USTR and State on such issues.

Nevertheless, USTR and DOL took issue with our findings that the agencies do not systematically monitor and enforce labor provisions for all FTA partners and lack a coordinated strategic approach to monitoring and enforcement. Although we made some adjustments in response to new information that USTR and DOL provided with their comments, we maintain that, in general, the two agencies have not systematically implemented all key elements of monitoring and enforcement with regard to FTA labor provisions. (See app. V for our full response to USTR's written comments and descriptions of our adjustments to the report in response.)

We acknowledge that USTR, DOL, and State generally collaborate in engaging with partner countries on labor issues and in addressing submissions.
However, the evidence that we reviewed, such as agendas for interagency meetings and our interviews with USTR and DOL staff, did not show that the agencies have developed a coordinated, strategic approach to systematically address possible inconsistencies with FTA labor provisions in most partner countries that have not been the subject of labor submissions. For example, we did not see evidence of a coordinated approach to address issues such as those identified by USTR in the partner countries it designates as high risk or that DOL identifies in its management reports.

As agreed with your office, unless you publicly announce the content of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the U.S. Trade Representative, the Secretary of Labor, the Secretary of State, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov.

If you or your staff have any questions about this report, please contact me at (202) 512-8612 or gianopoulosk@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VII.

This report examines (1) steps that selected partner countries have taken, and U.S. assistance they have received, to implement free trade agreement (FTA) labor provisions and other labor initiatives and the reported results of such steps; (2) complaints—known as submissions—about possible violations of FTA labor provisions that DOL has accepted and any problems related to the submission process; and (3) the extent to which the Office of the U.S. Trade Representative (USTR), Department of Labor (DOL), and Department of State (State) monitor and enforce partner countries' implementation of FTA labor provisions and report results to Congress. In addition, appendix II describes reported violence against labor unionists in selected FTA partner countries as well as steps that the countries have taken to address such occurrences. Appendix III describes U.S. agencies' efforts to monitor implementation of other labor initiatives. Appendix IV describes the status of labor submissions received by the Department of Labor.

We chose to concentrate our review on four FTAs and five partners to these FTAs, so that we could examine the unique set of circumstances for FTA partner countries with some specificity. The FTAs and partner countries on which we chose to focus—the Dominican Republic-Central America-United States Free Trade Agreement (CAFTA-DR), among whose six partner countries we selected El Salvador and Guatemala; the Colombia FTA; the Oman FTA; and the Peru FTA—are FTAs that contain labor provisions, with partner countries dispersed across Central America, South America, and the Middle East. We also selected CAFTA-DR because of the CAFTA-DR White Paper labor initiatives, and we selected El Salvador and Guatemala among CAFTA-DR countries because of the extent of U.S. assistance for labor programs and, in Guatemala's case, the FTA-related Enforcement Plan. In addition, we selected the Colombia FTA because of the Labor Action Plan, and we chose the Colombia and Peru FTAs because they contain language echoing the Bipartisan Trade Agreement of May 10, 2007, popularly known as the May 10th Agreement. However, the results of our review of these selected FTAs and partner countries cannot be generalized to all FTAs and partner countries.
In gathering information for each of our objectives, we engaged in three types of activities:
1. We obtained information and perspectives from U.S. government, foreign government, nongovernmental organization (NGO), labor union, and private sector officials; stakeholders such as umbrella business associations; and experts.
2. We obtained information and analysis from legal and secondary literature sources.
3. We obtained information through visits to partner countries.
During our visits to Colombia, El Salvador, Guatemala, Oman, and Peru, we met with U.S. officials; foreign government officials responsible for the implementation of labor provisions of the FTAs and other labor initiatives; umbrella business groups, such as chambers of commerce; officials of international organizations such as the International Labour Organization (ILO); trade unions; NGOs; and other subject matter experts. Additionally, we visited Costa Rica to meet with ILO officials in their Central America regional office, in San Jose. The views expressed by these officials and organizations cannot be generalized to all officials or organizations knowledgeable about labor provisions in the selected FTAs. To examine the steps that the selected FTA partner countries have taken to implement labor protection commitments under the respective agreements and other labor initiatives in the context of the respective FTAs, as well as the reported results of these steps, we obtained, reviewed, and analyzed documents from a variety of sources, including the four selected FTAs and their associated labor annexes as well as the CAFTA-DR White Paper and the Colombia Labor Action Plan. For this analysis, we included steps taken by partner countries, beginning with FTA negotiations for each FTA through May 2014. We also reviewed congressionally mandated reports, such as State's Country Reports on Human Rights Practices (Human Rights Reports), of which the 2013 reports were the latest available; USTR's annual trade agenda and trade report; and DOL's biennial Progress in Implementing Chapter 16 (Labor) and Capacity Building under the Dominican Republic-Central America-United States Free Trade Agreement. In addition, we reviewed reports submitted to Congress in conjunction with FTA implementing legislation, such as DOL's Labor Rights Reports for each of our selected FTAs. Additionally, we interviewed officials from DOL's International Labor Affairs Bureau and the Office of Trade and Labor Affairs; USTR's Labor Affairs office; and State's Bureau of Democracy, Human Rights, and Labor. We also interviewed State desk officers responsible for selected partner countries and labor, political, and economic officers at U.S. embassies. In each of the selected countries, except Colombia, we interviewed officials from the relevant ministries, including the ministry of labor. (Colombia's Ministry of Labor chose to provide written responses to our questions.) We did not independently evaluate FTA partner countries' enforcement of, or compliance with, laws and procedures but rather relied on evidence obtained from U.S. and partner government sources as well as stakeholder sources. Because of the ILO's role in interpreting, assessing, and improving signatories' compliance with ILO Conventions and Fundamental Principles, we conducted a series of meetings at the ILO in Geneva as well as with umbrella organizations participating in the ILO's tripartite (government-business-labor) governance structure. To examine U.S.
funding for labor-related assistance projects, we collected data on such funding obligations—from the date when Congress passed the respective FTA implementing legislation through 2013—from relevant officials at State, DOL, and USAID and publicly available data on trade-related labor assistance from USAID's Trade Capacity Building Database. We assessed the reliability of the data by (1) interviewing agency officials knowledgeable about the data sources and (2) tracing the data to source documents. We determined that the data were sufficiently reliable for the purposes of describing U.S. assistance for labor provisions in FTA countries. In addition, to identify any changes in levels of reported violence against workers exercising labor rights in the selected countries and the partner governments' responses (see app. II), we obtained, reviewed, and analyzed documents from USTR, DOL, and State, such as State's Human Rights Reports and DOL's Labor Rights Reports. During our fieldwork in each selected country, we interviewed U.S. and foreign government officials, labor unions, and NGOs to learn of any violence against unionists. Of the countries we selected for review, violence against unionists was reported only in Colombia and Guatemala. During our fieldwork in Colombia, we interviewed and obtained information from entities responsible for collecting and reporting data on violence against unionists, including the Colombian Prosecutor General's Office; ENS (Escuela Nacional Sindical), a labor rights NGO; and State's Human Rights and Labor officers at the U.S. embassy in Bogota. We assessed the reliability of ENS's and the Colombian Prosecutor General's data by (1) interviewing officials from each entity about their criteria and data collection methodology for determining whether a victim's union activity was a motive in the killing and (2) interviewing State's Human Rights and Labor officers, who report ENS's and the Colombian Prosecutor General's data in the annual Human Rights Report for Colombia. We determined the data to be sufficiently reliable for the purposes of describing reported violence against unionists in Colombia. In Guatemala, we interviewed, and obtained information from, the Guatemalan Prosecutor General's Office, which is responsible for prosecuting crimes against unionists, and the Guatemalan Ministry of the Interior, which investigates crimes against unionists at the direction of the Prosecutor General's Office. Because Guatemala did not collect data on violence against unionists, we did not review such data on homicide rates over time. We also interviewed State's political officer responsible for labor affairs at its embassy in Guatemala City, labor union officials, and NGO officials. We reviewed ILO reports and discussed steps by Colombia and Guatemala to address violence with ILO officials. Our analysis was based on reputable secondary sources. We did not make any independent determination regarding the merit of any evidence of violence against unionists. To examine labor submissions that have been filed under FTAs and any problems related to the submission process, we obtained, reviewed, and analyzed documents, including each labor submission filed with DOL. We interviewed officials from DOL's Office of Trade and Labor Affairs—the office responsible for investigating and reporting on submissions—as well as USTR and State officials who review DOL's reports and are involved in following up with FTA partner countries, if needed.
During our fieldwork, we interviewed union and NGO representatives involved in filing submissions, as well as officials from relevant government ministries, including the Ministries of Labor in Peru and Guatemala. To examine the extent to which USTR, DOL, and State monitor and enforce implementation of FTA labor provisions and associated commitments (see app. III) and report results to Congress, we requested, reviewed, and analyzed documents from each agency. These documents included strategic plans and documents reflecting monitoring activities, such as DOL's management reports for 13 selected FTA partner countries, State cables, and USTR monitoring documents. In addition, we interviewed selected members of the USTR-chaired Trade Policy Staff Committee and its subcommittee on FTA labor monitoring and enforcement. To examine the resources that USTR, DOL, and State dedicate to monitoring labor provisions in FTAs, we obtained and analyzed data on staffing and other resources such as travel. We interviewed officials from DOL's International Labor Affairs Bureau and its Office of Trade and Labor Affairs; USTR; and State's Bureau of Democracy, Human Rights, and Labor, as well as State's in-country labor, labor reporting, political, and economic officers at U.S. embassies during our fieldwork. In addition, during our fieldwork, we interviewed officials from the partner countries' various ministries, including the ministries of labor, as well as representatives from labor unions and officials of NGOs implementing programs funded by U.S. agencies. To determine whether USTR, DOL, and State report the results of their monitoring activities to Congress, we reviewed and analyzed the agencies' reports, such as USTR's Trade Policy Agenda and Annual Report (2009 to 2013), DOL Annual Performance Reports (2010 to 2013), and State's Human Rights Reports (2009 to 2013). We conducted this performance audit from May 2013 to November 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. U.S. and Colombian officials and representatives of unions, nongovernmental organizations (NGOs), and private sector groups we met with reported that violence against unionists continues in Colombia. Union leaders and NGOs reported that this violence undermines workers' ability to associate freely, because workers fear that their union activities may make them victims of violence. NGO officials in Colombia reported that although murders of unionists are a serious concern, threats of violence against union members also create a significant deterrent to workers organizing, with one NGO official noting a greater impact in rural areas than in major cities. According to this NGO official, authorities are less likely to have the capacity to investigate and respond to threats made in rural areas, with the result that victims flee the area or disengage from labor activities. Data collected by the NGO Escuela Nacional Sindical (ENS) and the Colombian Prosecutor General's Office, as reported in the U.S. Department of State's Human Rights Reports, show that murders of trade unionists have generally decreased over the past decade, although homicides continue to occur (see fig. 4).
As of July 2014, ENS had recorded 35 murders in 2013, compared with 102 murders of union members and labor activists in 2003. This trend continued in the period since the Labor Action Plan—which contains specific commitments to address labor violence—was announced in 2011. Labor union officials acknowledged that the number of homicides has decreased, but they noted that violence continues to exist. ENS and Colombia's Prosecutor General's Office have used different methodologies to determine whether a victim was murdered because of union affiliation. In some cases, ENS has considered a murder union-related, while the Prosecutor General's Office has classified the murder as unrelated to union activities. For more information on the differences in data collection methodologies, see Congressional Research Service, U.S.-Colombia Free Trade Agreement: Labor Issues, RL34759 (Washington, D.C.: January 2012). The government of Colombia committed to a number of reforms related to labor violence in April 2011 in the Labor Action Plan. Such commitments included broadening the scope of the government of Colombia's protection program, Unidad Nacional de Protección (UNP), to include labor activists, union members, and people engaging in efforts to form a union; increasing the budget to provide the additional resources necessary to support the expansion of the protection program; eliminating the backlog of protection applications awaiting risk assessments; completing future risk assessments within a 30-day period (as described below); and reforming the Colombian interagency committee that reviews risk assessments. In addition, Colombia committed to criminal justice reforms in the Labor Action Plan, which included assigning 95 additional full-time judicial police investigators to support the investigations of criminal cases involving union members and activists; analyzing closed cases of union member homicides to determine patterns relating to targets, criminal methods, and any evidence of motives; identifying budgetary needs for training investigators and prosecutors on issues related to labor cases; and instructing investigators to determine whether a homicide victim was an active or retired union member, or was actively engaged in union formation and organization, during the initial phase of the investigation. According to the Office of the U.S. Trade Representative (USTR) and the Department of Labor (DOL), in accordance with Colombia's Labor Action Plan commitments, the Colombian government has implemented changes to a number of institutions and programs. For example, union members, in addition to union leaders, are now included in the UNP's jurisdiction. As of April 2014, the UNP was protecting more than 670 unionists, with about 24 percent of its budget dedicated to the protection of unionists and labor activists. Additionally, USTR has reported that the Colombian government reformed the risk assessment process at the UNP and eliminated the backlog of hundreds of applicants to the protection program. During our fieldwork in Colombia, UNP officials described changes to the risk assessment process. For example, when an application is received, instead of a single individual investigating the case, as occurred before the process's reform, a committee now conducts interviews and fieldwork to investigate the nature of the threat and determine a risk score.
The case is then forwarded to another committee, comprising representatives from organized labor, other vulnerable populations, and government agencies, that determines the risk level in each case as ordinary, extraordinary, or extreme. UNP officials reported that, based on the outcome of the risk assessment, the UNP takes protection measures ranging from providing a cell phone to providing an armored car and armed bodyguards. State's 2013 Human Rights Report notes that between January 1 and October 31, 2013, the UNP conducted 565 risk assessments of union leaders or members. Of those, the UNP classified 203, about 36 percent, as facing an extraordinary or extreme threat and provided the leaders or members with protection measures. According to UNP officials, prior to the Labor Action Plan–related reforms, about 10 to 15 percent of risk applicants were determined to be under extraordinary or extreme risk. Additionally, according to State, approximately one-half of the unionists enrolled in the program were provided with "hard" protection measures that included a bodyguard. According to USTR, Colombia increased the budget of its Prosecutor General's Office, in part to investigate and prosecute cases involving union members or labor activists as victims. Further, USTR has reported that the Prosecutor General's Office has issued a mandate assigning over 20 prosecutors exclusively to crimes against union members and labor activists. In addition, the National Police have assigned an additional 100 full-time judicial police investigators to support the prosecutors in investigating cases involving union members and labor activists. To implement the Labor Action Plan commitments to identify and effectively prosecute intellectual authors of labor homicides, the Prosecutor General's Office created a context and analysis directorate tasked with investigating the patterns and context of similar cases, including labor homicides. Officials from this unit whom we met during our fieldwork in Colombia described the unit as taking an integrated approach to analyzing cases across the spectrum of human rights crimes to determine common themes and perpetrators. Despite the actions Colombia has taken to reduce violence against union members and labor activists, Colombian union and NGO officials, as well as USTR officials, report that violence and impunity remain problems. Union members we met with during our fieldwork in Colombia reported a high impunity rate for violent crimes against unionists. According to DOL, of the 100 unionist murders that have occurred since 2011, Colombia's Prosecutor General's Office has obtained only one conviction. Union officials we met with acknowledged that the government of Colombia has taken positive steps to address violence, that the number of labor-related homicides has decreased, and that reforms have been implemented at the UNP. However, they also reported that they are concerned for their safety because of their union activities. Additionally, union officials voiced the concern that the UNP's risk assessments did not accurately capture the dangers that union leaders and members face. UNP officials we met with acknowledged that two union leaders who were receiving "light" protection were murdered and that another union leader was murdered while awaiting the UNP's risk assessment. In support of reducing violence in Colombia, including violence against unionists, U.S. agencies have funded multiple assistance projects. According to U.S.
Agency for International Development (USAID) officials, from 2001 to 2010, the agency provided about $11.6 million in funding through its Human Rights fund to establish and support Colombia's UNP. The UNP, which is under the Colombian Ministry of Interior, received between 10 and 18 percent of its total annual budget from USAID funding from 2001 to 2005 (ranging from about $916,000 in 2001 to about $2.5 million in 2005). According to USAID, its assistance for the UNP has steadily decreased as budget support from the government of Colombia and the effectiveness of the UNP have increased. For example, from 2011 to 2014, USAID provided approximately $150,000 per year in assistance to the UNP. In support of Colombia's Labor Action Plan commitment to seek the cooperation, advice, and technical assistance of the International Labour Organization (ILO), DOL is currently funding a 5-year (2011 to 2016), $7.82 million technical cooperation project implemented by the ILO. The objectives of this project include strengthening the institutional capacity of the Colombian government to enhance protection measures for trade union leaders, members, activists, and organizers and combating impunity for perpetrators of violence against them. Under this project, the ILO is training Colombian prosecutors, investigators, and judges on labor rights and on investigating crimes against union leaders and labor activists. In addition, officials from Colombia's Prosecutor General's Office reported that the office has received assistance from the U.S. Department of Justice on broad criminal justice reforms, including transitioning from an inquisitorial to an accusatory legal system. USTR reports that State has funded, and the Department of Justice has implemented, assistance to train investigators and prosecutors throughout Colombia in best practices for crime scene investigation, including forensic evidence handling, investigating threats, and prosecutorial management of cases for trial. The Department of Justice's and State's programs are aimed at broader criminal justice reform; however, Colombia's Prosecutor General's Office reported to us that the reforms apply to investigating and prosecuting violence against unionists. U.S. and Guatemalan government officials and ILO and union representatives reported that violence against unionists exists in Guatemala; however, the extent of the problem is unclear because disaggregated statistics on violence against unionists are not collected. Nevertheless, a union collected information on 63 cases in which union leaders or members were killed from 2007 through 2013. The unions alleged that the motive for these murders was related to the victims' union activities and that the government of Guatemala has not done enough to investigate and resolve the cases. In addition, ILO officials stated that they have detailed extremely serious and systematic violations of the right to freedom of association in Guatemala, including murder. A 2012 ILO complaint against the Guatemalan government cited 63 murders of trade unionists and called for the establishment of an ILO commission of inquiry. Guatemalan officials confirmed that violence against trade unionists exists but reported that many of the cases cited in the ILO complaint were the result of general, extensive violence and crime in the country rather than violence directed against trade unionists.
According to Ministry of the Interior officials, the ministry's initial investigation of the 63 murder cases was unable to identify any link between the perpetrators' motives and the victims' union activity. Furthermore, as part of a cooperative agreement with the Guatemalan Attorney General's Office, the International Commission against Impunity in Guatemala (CICIG) began a review of the 58 cases cited in the ILO complaint to ensure the adequacy of the investigations. According to DOL, as of June 2014, CICIG had completed its review of 56 cases and concluded that 6 of the cases indicated a possible link to union activity. In addition, unions reported other violent acts, such as threats, bribes, and intimidation against union members. According to union representatives we interviewed, threats of violence begin with harassing phone calls to a union leader or member. These representatives stated that if the phone calls do not get the desired results, the harassment escalates to include physical intimidation and threats of kidnapping. If the physical intimidation fails, then kidnapping or murder occurs. To address the complaint filed by the unions, the ILO initiated the process to establish a commission of inquiry. To avoid the establishment of the commission, the government of Guatemala signed a memorandum of understanding in March 2013, committing to take specific steps to address the issues in the complaint and establishing a plan, called the Road Map, that outlined the actions needed. In response, the ILO decided to postpone the establishment of the Commission of Inquiry and has been working with Guatemala to implement the Road Map. According to information that the government of Guatemala provided to the ILO, as of March 2014, steps that the government had taken to address violence against unionists included a cooperative agreement signed with CICIG to support the Public Prosecutor's Office in analyzing cases of violence against trade unionists; the transfer of 20 trained investigators to the Attorney General's Office and the creation of the new post of assistant prosecutor in the special unit investigating crimes against trade unionists; improvement in the time periods for trials and convictions, including those for some of the 58 murders reported to the ILO; and submission by the Ministry of the Interior, in the context of analyzing assaults on trade unionists, of a draft protocol for implementing immediate and preventive security measures for trade union leaders, unionized workers, and workers from trade unions in the process of being established. Following are descriptions of some of the Guatemalan government's ongoing efforts to address violence against trade unionists.
Cooperative agreement with CICIG. According to a March 2014 ILO report, the Guatemalan Attorney General's Office signed a cooperative agreement with CICIG on September 24, 2013. As part of the agreement, the Attorney General's Office established a coordination mechanism with CICIG for the analysis of specific cases of violence against trade union members. CICIG agreed to support the Attorney General's Office in the analysis of specific cases of violence against union members, providing recommendations to strengthen the investigation of those cases. Under the new coordination mechanism, the 58 murder cases cited in the ILO complaint were transferred to CICIG to continue the investigation process.
In addition, the Attorney General agreed to cooperate with CICIG in the analysis of crime trends based on landmark cases of attacks and violence against union members known to the different units and offices of the Public Prosecutor's Office.
Additional trained investigators. The Ministry of the Interior and the Attorney General's Office established interinstitutional links for cooperation to strengthen investigations and criminal prosecutions. Through this mechanism, the Ministry of the Interior transferred 20 trained investigators to the Attorney General's Office in an effort to provide additional support for investigating cases of violence against unionists.
Timely trials and convictions. The government of Guatemala has worked to reduce the length of time it takes to bring perpetrators of crimes to trial and issue convictions. In regard to the 58 murder cases cited in the ILO complaint, the Attorney General's Office provided an update of the status of the 28 cases that have been brought to court, reporting that extinction of criminal liability has been applied in 4 cases, arrest warrants have been issued in 13 cases, rulings have been handed down in 6 cases, and 5 cases are pending trial.
Strengthening analysis of attacks against human rights defenders. The Attorney General's Office restructured and strengthened the unit that analyzes attacks against human rights defenders, a category that includes unionists. As part of this process, representatives of unionized workers will meet regularly to study trends in attacks against human rights defenders and draw up recommendations for investigations by the Attorney General's Office, to assist both criminal investigations and the conviction of the perpetrators. An adequately staffed office will be created to carry out the relevant investigations.
The government of Guatemala reported several other steps it intends to initiate or has initiated to address the issues in the ILO complaint. For example, according to the ILO's March 2014 report, the Ministry of Labor and Social Welfare proposed amendments to the Labor Code and other relevant laws, incorporating the amendments proposed by the ILO supervisory bodies. Measures have also been proposed by the Ministry of Labor to enable the general labor inspectorate to fulfill its mandate to ensure the effective application of labor legislation. On the basis of the progress demonstrated in addressing points in the Road Map reported by the government of Guatemala, the ILO decided in March 2014 to again postpone the establishment of the Commission of Inquiry and to reevaluate progress in November 2014. The United States has provided some funding through State's Bureau of International Narcotics and Law Enforcement (INL) to assist Guatemalan authorities in providing better protection to unionists. INL provided Guatemala $2 million in assistance for fiscal years 2012 through 2016 to implement reform and capacity-building projects. The goal of the INL funding is to increase the investigative capabilities of law enforcement officers within the Ministry of the Interior. Part of the funding focuses on defenders of human rights, including addressing violence against trade unionists. Assistance provided by INL has ranged from the provision of equipment to technical training and exchanges.
For example, INL provided the Ministry of the Interior's recently established security division, División de Protección de Personas y Seguridad (DPPS), with filing cabinets to organize files on union leaders for whom it was providing protection. INL officials stated that DPPS lacked a tracking system and had rooms stacked with disorganized files and, as a result, could not accurately track the number of people receiving protection. According to INL officials, as of December 2013, DPPS was providing protection for about 1,000 union leaders and members. INL officials also stated that they had discussed the possibility of creating a risk analysis unit in DPPS, similar to Colombia's UNP. In addition to being responsible for monitoring partner countries' implementation of free trade agreement (FTA) labor provisions, U.S. agencies are responsible for monitoring the implementation of labor initiatives such as the White Paper and the Labor Action Plan, which were developed in the context of, respectively, the Dominican Republic-Central America-United States Free Trade Agreement (CAFTA-DR) and the United States-Colombia Trade Promotion Agreement (Colombia FTA). U.S. agencies are also responsible for monitoring Guatemala's Labor Enforcement Plan, developed in response to a CAFTA-DR labor submission to the Department of Labor (DOL). To discharge its responsibility for monitoring implementation of the CAFTA-DR White Paper projects, in 2007 DOL provided a $10 million grant to the International Labour Organization's (ILO) Verification Project. The Verification Project prepared reports every 6 months from 2005 through 2010 regarding the implementation of the White Paper projects. The Verification Project and most White Paper projects concluded in 2012. The ILO Verification Project tracked spending and implementation of every White Paper project, requiring grant recipients to submit periodic reports to the ILO for the biannual reports. ILO staff provided the reports to DOL, which used them to ensure the transparency and track the progress of the U.S.-funded projects. Entities implementing White Paper projects included, among others, the U.S. Agency for International Development (USAID), partner countries' ministries of labor and labor courts, and nongovernmental organizations. According to ILO staff, although the Verification Project did not measure the impact of the projects on labor conditions, the reports reflected improvements in government institutions and other entities and demonstrated the benefit of addressing some issues faced by workers. ILO officials noted that, besides providing accountability for the expenditure of funds and tracking the progress of projects, the Verification Project led to improvements in the data gathering and reporting capabilities of the ministries of labor. The Office of the U.S. Trade Representative (USTR), in cooperation with DOL and the Department of State (State), is responsible for monitoring the implementation of the Colombia Labor Action Plan, most of which Colombia was required to implement by 2012. The plan listed steps to improve labor conditions, mutually agreed on by the U.S. and Colombian governments, that the Colombian government agreed to take before the President of the United States would put the FTA forward for congressional consideration.
The conditions that the Labor Action Plan addressed included violence against Colombian labor union members, inadequate efforts to bring to justice perpetrators of crimes against labor union members, and insufficient protection of workers' rights in Colombia. According to USTR staff, the plan continues to serve as a framework for cooperation with Colombia. The plan includes regular meetings between officials from both countries to monitor and implement the plan through 2013, and both governments announced an extension of these meetings through at least 2014. In April 2012, USTR announced that the Colombian government had taken important steps to fulfill the Labor Action Plan and that the FTA would go into effect. USTR officials did not provide records or documentation of these steps, stating that they did not request such records because the website for the Presidency of Colombia supplied documentation of all actions taken. On reviewing some of the documentation on the website, we found that the documents were not structured to readily provide information about steps that the Colombian government had taken to meet its Labor Action Plan commitments. In written responses to our questions, Colombia's Ministry of Labor indicated that the government had met all Labor Action Plan commitments, and the ministry provided examples of the government's actions to meet these commitments. USTR officials reported that USTR, DOL, and State had engaged extensively with the Colombian Ministry of Labor and other government institutions in Colombia to discuss and confirm Colombia's progress in implementing each element of the Labor Action Plan. According to USTR, as of August 2014, USTR and DOL staff had made seven visits to Colombia since the FTA entered into force to address these issues. USTR fact sheets and a USTR and DOL report from April 2014 provided information about Colombia's accomplishments under the Labor Action Plan—for example, the report detailed the number of labor inspectors hired, the number of special prosecutors and investigators hired, and the number of convictions in cases of violence against labor leaders—as well as remaining challenges. In addition, according to DOL, a DOL staff member, detailed to the U.S. embassy in Bogota from 2011 to 2012 to oversee implementation of the Action Plan, met during that period with Colombian government officials, nongovernmental organizations, union representatives, and other stakeholders to discuss the Labor Action Plan's implementation, among other topics. USTR, in cooperation with DOL and State, is responsible for monitoring implementation of the Guatemala Enforcement Plan, agreed to by the U.S. and Guatemalan governments in April 2013 to address concerns identified in a labor submission filed under CAFTA-DR. USTR provided us with internal documents that it had used to track Guatemala's progress in taking the steps required by the plan. For example, USTR provided a matrix tracking the status of legal instruments required under the Enforcement Plan; USTR also provided quarterly progress reports submitted by the Guatemalan Ministry of Labor, identifying steps that the ministry had taken to address commitments in the Enforcement Plan. USTR officials said that since 2009, USTR and DOL had directly engaged with Guatemala on the labor case through 17 trips, including 7 trips that involved staff at the Assistant Secretary level or higher.
In addition, officials at the Guatemalan Ministry of Labor told us that they had interacted extensively with DOL and USTR, including through multiple video-teleconferences, in-person meetings, and correspondence, to update them on progress in implementing the Enforcement Plan. In April 2014, USTR told us that although it recognized steps that Guatemala had taken under the Enforcement Plan, it had not seen sufficient progress to close the case. USTR officials stated that if the Guatemalan government did not take the steps delineated in the plan by the specified dates, USTR might decide to pursue arbitration. In September 2014, USTR announced that the U.S. government was pursuing dispute settlement proceedings against Guatemala because it had not met the terms of the Enforcement Plan. The following provides details about the status of the five labor submissions that the Department of Labor (DOL) has accepted since 2008. In June 2011, DOL accepted a labor submission regarding Bahrain, 50 days after receipt. The submission remains open, with consultations ongoing. The submission, received from the American Federation of Labor and Congress of Industrial Organizations (AFL-CIO) with a statement from the General Federation of Bahrain Trade Unions, alleged that Bahrain had violated free trade agreement (FTA) labor provisions regarding the right of association, particularly nondiscrimination against trade unionists. In December 2012, DOL issued a public report that found that the government of Bahrain's actions appeared inconsistent with its commitments under the labor chapter. Specifically, the report found that trade unionists and leaders were targeted for dismissal, and in some cases prosecuted, in part for their role in organizing and participating in a general strike and that the dismissals reflected discrimination based in part on political opinion and activities. The report also found sectarian discrimination against Shia workers. DOL's report outlined nine recommendations focused on legal and administrative changes to Bahraini labor law for the U.S. government to pursue during consultations with Bahrain and also recommended that the parties develop a plan of action. On May 6, 2013, more than 4 months after DOL issued its report, the U.S. government formally requested cooperative labor consultations with the government of Bahrain in a joint letter from the Acting Secretary of Labor and the Acting U.S. Trade Representative. Consultations began when an interagency delegation met with the government of Bahrain in Manama, Bahrain, on July 15 and 16, 2013. However, according to DOL, the first round of consultations did not resolve all issues DOL had identified, and consultations were ongoing as of September 2014. The Office of the U.S. Trade Representative (USTR) attempted to schedule second-round consultations but agreed to a request from Bahrain to delay them. The United States and Bahrain held a second round of consultations on June 22 and 23, 2014, according to DOL. During the consultations, the two parties used the recommendations in the DOL report as a basis for jointly developing an action plan, which includes concrete steps for the government of Bahrain to address the concerns that the report raises. USTR officials noted that the allegations raised in the submission are subject to resolution through the consultation procedures but are not subject to dispute settlement. Figure 5 shows the timeline and status of the Bahrain labor submission.
In February 2012, DOL accepted a labor submission regarding the Dominican Republic, 62 days after receipt. The submission remains open and unresolved while DOL monitors the Dominican Republic's progress in addressing concerns that the submission raises. The submission, filed by a private individual, alleged the failure of the Dominican Republic's government to enforce its labor laws in the sugar sector as required under the Dominican Republic-Central America-United States Free Trade Agreement (CAFTA-DR). The submission alleged nine violations, ranging from human trafficking and forced labor to retaliatory firing of workers for affiliation with, or attempts to organize, labor groups or unions. In September 2013, DOL issued a public report that found evidence of apparent and potential violations of the Dominican Republic's labor laws pertaining to the sugar sector. DOL's report also identified significant concerns about procedural and methodological shortcomings in the inspection process that undermine the government's capacity to effectively identify labor violations. In addition, the report cited concerns regarding freedom of association, the right to organize, and collective bargaining. DOL's report offered 11 recommendations to the government of the Dominican Republic to address the report's findings and improve enforcement of Dominican labor laws in the sugar sector and stated that DOL would reassess the situation 6 and 12 months after the report's publication. The recommendations range from administrative changes to help the government of the Dominican Republic improve its enforcement of Dominican labor laws to outreach suggestions for better informing sugar sector workers about their labor rights. In April 2014, DOL issued its 6-month assessment, noting that the Dominican Republic's Ministry of Labor had committed to measures that, if instituted, would begin to address some of the recommendations in DOL's public report. DOL's assessment also noted that the government of the Dominican Republic had not yet indicated plans or taken actions to address most of the public report's recommendations. Figure 6 shows the timeline and status of the Dominican Republic labor submission. In June 2008, DOL accepted a submission regarding Guatemala, 50 days after receipt. The submission remains open and unresolved and is currently in arbitration. The submission, from the AFL-CIO and six Guatemalan worker organizations, alleged that Guatemala had violated its obligation under CAFTA-DR to effectively enforce its labor laws regarding freedom of association, the rights to organize and bargain collectively, and acceptable conditions of work. In January 2009, DOL issued a public report that found significant weaknesses in Guatemala's labor law enforcement. The report outlined 10 recommendations for the government of Guatemala, including administrative and technical changes, to address the issues raised in the submission. The report also recommended that DOL reassess Guatemala's progress in implementing the recommendations within 6 months after the publication of DOL's report to determine whether further action was warranted. USTR and DOL engaged with the Guatemalan government in an effort to address DOL's recommendations for systemic improvements in the enforcement of labor laws in Guatemala.
In July 2010, after Guatemala’s actions proved insufficient to address the concerns raised in the report, USTR and DOL jointly requested cooperative labor consultations with Guatemala under the CAFTA-DR Labor Chapter. The consultations failed to resolve the matter, and on May 16, 2011, USTR requested a meeting of the CAFTA-DR’s free trade commission pursuant to the agreement’s Dispute Settlement Chapter. The commission met in June 2011 but did not resolve the dispute, and in August 2011, the United States requested the establishment of an arbitral panel. After the panel was constituted in November 2012, the parties agreed to suspend it to allow for additional negotiation. In April 2013, the parties reached agreement on a comprehensive 18-point Enforcement Plan with concrete benchmarks and timelines for implementation. The plan extended the arbitration panel’s suspension for 6 months, with the possibility of an additional 6- month extension based on Guatemala’s progress in implementing the plan. In October 2013, 6 months after the signing of the Enforcement Plan, USTR, in consultation with DOL, determined that Guatemala had taken sufficient steps to enact the legal instruments called for under the plan. USTR and DOL granted the 6-month extension, noting that significant work remained to ensure the full implementation of the Enforcement Plan. In April 2014, USTR, in consultation with DOL, granted Guatemala a 4- month extension to continue its implementation of the Enforcement Plan but retains the right to reactivate the arbitration panel at any point during this period. In September 2014, the United States reactivated the panel after determining that Guatemala had not met the terms of the Enforcement Plan and that concerns over the enforcement of Guatemala’s labor laws had not been resolved. Figure 7 shows the timeline and status of the Guatemala labor submission. In May 2012, DOL accepted a submission regarding Honduras, 49 days after receipt. The submission remains open and unresolved, pending publication of DOL’s report. The submission, from the AFL-CIO and 26 Honduran organizations, alleged that the government of Honduras had violated its obligation under CAFTA-DR to enforce its labor laws relating to freedom of association; the right to organize; child labor; the right to bargain collectively; and the right to acceptable conditions of work in the Honduran apparel and auto manufacturing, agriculture, and port sectors. According to DOL officials, DOL has received over 1,200 documents in Spanish and visited Honduras four times to meet with stakeholders, reflecting the breadth of the allegations and detailed information reviewed and analyzed. As of May 2014, DOL was continuing to review documentation of the allegations and, according to DOL officials, is preparing its report. Figure 8 shows the timeline and status of the Honduras labor submission. In July 2011, DOL accepted a submission regarding Peru, 202 days after receipt, which DOL closed as resolved in August 2012. The submission, from the Peruvian National Union of Tax Administration Workers (SINAUT), alleged that Peru’s National Superintendency of Tax Administration (SUNAT), an executive branch agency of the Peruvian government that oversees both customs and tax administration, had failed to comply with Peru’s labor laws as they relate to collective bargaining, in violation of the United States-Peru Trade Promotion Agreement. 
In August 2012, DOL issued a public report finding that although the Ministry of Labor and Promotion of Employment appeared to have fulfilled its duties during the collective bargaining processes at issue, SUNAT had failed to comply with certain elements of the Peruvian Collective Bargaining Law, including deadlines for launching negotiations. Further, with regard to all other issues raised in the submission, DOL determined that important legal ambiguity during the period at issue prevented a finding that SUNAT had failed to comply with the law or that the government of Peru had failed to comply with, or enforce, its own labor laws during that time. DOL's report did not recommend formal consultations between the U.S. government and the Peruvian government. According to DOL, because the government of Peru had taken important steps to address some of the issues raised in the submission, including issuing legal instruments to help clarify legal ambiguity and facilitate collective bargaining, DOL did not believe that formal consultations were needed to continue positive engagement and progress on these matters. As a result, DOL closed the submission as resolved in August 2012. In September 2011, according to DOL, while it was reviewing the submission, the government of Peru issued two executive orders that clarified the parties' collective bargaining duties in this and similar cases. DOL's report on the submission explained that on March 29, 2012, after applying the recent executive orders, an arbitral panel reached a decision favorable to SINAUT that contained both economic and noneconomic awards for the union. Moreover, DOL reported that the government of Peru appealed the ruling, contending that it reflected a misapplication and misinterpretation of the Public Sector Budget Law. According to the DOL report, on appeal, the Peruvian labor courts overturned the economic elements of the arbitral award, based on a ruling that those elements conflicted with Peru's Public Sector Budget Law, but sustained the noneconomic elements of the award. Figure 9 shows the timeline and status of the Peru labor submission. The following are GAO's comments regarding the Office of the U.S. Trade Representative's (USTR) letter dated September 16, 2014. In its letter, USTR questions our evaluation of its monitoring and enforcement of free trade agreement (FTA) partners' compliance with their labor commitments and expresses concern that we do not give sufficient credit to U.S. agencies for their multifaceted work to help FTA partners comply with their labor commitments. We acknowledge in our report that the agencies have given, overall, greater political and organizational priority to monitoring and enforcement of FTA labor commitments since we last reported on this topic in 2009. The evidence that USTR and the Department of Labor (DOL) provided demonstrates that their monitoring and enforcement activities have sometimes led to a strengthening of FTA partners' labor laws and practices that might not have occurred absent these activities. Nevertheless, we stand by our current assessment that USTR and DOL have not demonstrated a proactive and systematic approach to monitoring and enforcing FTA partners' compliance with their FTA labor commitments, with the exception of a few partner countries. Indeed, the evidence that the agencies provided shows that USTR's and DOL's approaches and actions are generally ad hoc and leave important gaps.
Our review focuses on steps that USTR and DOL have taken, with support from the Department of State (State), to monitor and enforce the labor provisions in the FTAs. In reviewing USTR's and DOL's monitoring and enforcement of the provisions in the FTA labor chapters, we looked for evidence that USTR and DOL took steps to ensure that each FTA partner had the requisite laws in place by the time the President determined that the FTA could enter into force, as well as evidence that USTR and DOL have taken steps to ensure that the partners are enforcing their laws and maintaining labor rights in law and practice since the FTAs entered into force. Further, because USTR announced in 2009 that it would not wait for complaints (e.g., labor submissions) to investigate and address possible inconsistencies with FTA provisions, we looked for evidence that the agencies had established means or mechanisms to anticipate, analyze, and resolve problems in the absence of a submission. We consider the approach to monitoring and enforcement that USTR outlines in its letter to be broadly compatible with the definitions and key elements of monitoring and enforcement that we have identified in our report. That is, enforcement involves taking action to secure compliance by the partner, and monitoring and enforcement typically involve six key steps: (1) gathering and analyzing information, (2) setting priorities, (3) identifying compliance problems, (4) developing and implementing responses, (5) taking enforcement actions, and (6) coordinating with agencies. Following are specific comments in response to USTR's letter. Comment 1: Throughout our year-long review process, we requested documentation of USTR's monitoring and enforcement of labor provisions. However, USTR provided a large number of documents very late in the process and provided most documents only after receiving our draft report. We thoroughly reviewed and considered all documents that USTR provided and made changes in our report as appropriate to ensure its accuracy and completeness. Several of these changes are described in the comments that follow. Comment 2: The evidence that we obtained from USTR and DOL does not support USTR's assertion that the two agencies actively and systematically monitored and enforced labor provisions in all 20 FTA countries, implementing all of the key elements that we identified, regardless of whether the country had been the subject of a submission. USTR and DOL submitted evidence that they had taken some of these key steps in several FTA partner countries, three of which—Colombia, Jordan, and Panama—were not the subject of labor submissions. In addition, to support its technical comments about a draft of our report, USTR submitted further evidence of certain proactive steps, which we have incorporated in our report as appropriate. However, in the absence of evidence that USTR and DOL have taken steps to analyze and address possible inconsistencies with FTA labor provisions in most partner countries that have not been the subject of a labor submission, we could not conclude that the agencies had implemented a systematic approach to monitoring the implementation of all labor provisions in all FTAs.
Moreover, according to DOL—on which USTR relies for day-to-day monitoring of the FTA partners—the staff of DOL's monitoring division spent 80 percent of their work hours in the past year monitoring implementation of the Guatemala Enforcement Plan, addressing submissions for five other countries, and following up on activities initiated under the Colombia Labor Action Plan. This left the employees' remaining work hours available to monitor and engage the other 14 FTA partner countries. With the exception of Colombia, stakeholders in countries we visited indicated that their recent interactions with USTR and DOL had been very limited. Stakeholders in these countries also told us that the FTA labor affairs councils in their countries had been largely inactive and that the councils' meetings were considered a formality. Comment 3: Having reviewed all of the cables that State provided, we consider them a useful tool for DOL and State to obtain information about labor conditions in partner countries. Our report acknowledges that USTR is an addressee on such cables and is generally well informed about labor conditions in partner countries as a result of this and other input. Obtaining and analyzing information from credible sources such as State is integral to completing the first step of the monitoring and enforcement process. Comment 4: USTR states that it deploys a range of tools to try to resolve concerns, whether they were identified formally or informally. However, we found that USTR's and DOL's approach to addressing concerns that were identified formally, such as by a submission, is more systematic than their approach to addressing concerns that they have identified informally. For example, in responding to a labor submission, DOL conducts a formal review and issues an official report, which generally identifies any possible inconsistencies with labor provisions in the FTA and contains specific recommendations to resolve the issue. DOL and USTR typically use the report's findings to further engage with the FTA partner country and develop an action plan, such as in the case of Guatemala. When they have identified a concern informally, USTR and DOL may engage with the partner country to discuss the concern, usually in the context of an established FTA mechanism such as the FTA labor affairs council or free trade commission. However, we found that USTR and DOL use these mechanisms infrequently, and USTR and DOL officials told us that except for a few high-priority countries such as Colombia, the agencies do not typically perform in-depth analysis of a partner country's compliance with its FTA labor commitments unless the country has been the subject of a submission. Comment 5: We acknowledge that USTR's work with Colombia did not end with the ratification of the FTA. We added text to our report to indicate our concurrence with this point and to illustrate USTR's postratification monitoring and enforcement activities. Comment 6: We acknowledge that USTR and DOL have engaged extensively with the government of Colombia to monitor the implementation of the Labor Action Plan, in the process addressing several of the key elements of monitoring and enforcing FTAs. However, we cannot conclude that a systematic approach to monitoring and enforcing labor provisions is in place, because we did not see evidence of certain key elements—for example, an approach to respond to outstanding issues identified since the implementation of the Labor Action Plan.
USTR also indicates that its level of engagement to monitor and enforce labor provisions in Colombia is not unique and that USTR and DOL have engaged with other FTA countries to a similar extent as with Colombia. However, we did not see evidence of such a level of engagement with other countries that have not been the subject of a submission. In fact, DOL reported that in the last year, its monitoring division staff spent 80 percent of their time following up on the Colombia Labor Action Plan activities and addressing submissions. Comment 7: We do not assume or indicate that a tailored, country-specific approach to labor monitoring and enforcement is inappropriate, and we do not exclude such approaches from our evaluation. We reviewed the documentation that USTR and DOL provided, looking for evidence that key elements of monitoring and enforcement have been implemented. We recognize that USTR and DOL have successfully implemented certain steps, such as gathering information and identifying compliance problems, using different methods in different countries. However, we did not see evidence of an overarching strategy that assures that priorities and problems are systematically addressed. Comment 8: We evaluated all evidence that USTR and DOL provided, including documentation that USTR provided with its technical comments, and we made changes to our report. Specifically, we incorporated language acknowledging that in some partner countries, the agencies took steps before and after FTA ratification that were consistent with key elements of monitoring and enforcement and that likely contributed to assuring that the partners met certain labor commitments. Comment 9: We acknowledge that the labor submission process is a central component of USTR's and DOL's approach to addressing possible inconsistencies with FTA labor provisions in partner countries and that the agencies have done extensive work to investigate allegations in the submissions that DOL has accepted. However, in our view, reliance on labor submissions to assess compliance and take enforcement actions is inconsistent with USTR's 2009 commitment to no longer enforce FTA partners' labor commitments "only on a complaint-driven basis" but instead to "immediately identify and investigate labor violations." Comment 10: As stated, the report acknowledges that labor submissions are a key component of USTR's and DOL's monitoring and enforcement of FTA labor provisions and that the agencies undertake considerable research and analysis in the process of addressing submissions. We modified the text of the report to clarify that after DOL receives a submission, it works with USTR and State to engage diplomatically to address concerns, as well as independently to investigate and analyze the issues. Comment 11: We acknowledge that USTR and DOL work together to address labor concerns identified in submissions and to engage with partner countries regarding labor matters. Our review of evidence obtained from the agencies and in FTA partner countries generally confirmed that USTR and DOL also coordinate on an ad hoc basis. However, the evidence that we reviewed did not show that the agencies have developed a coordinated, strategic approach to systematically address the key steps of monitoring and enforcing labor provisions in all FTAs and to address labor conditions that may be inconsistent with FTA provisions, such as the conditions identified in DOL's management reports.
For example, agendas that USTR provided for two meetings of the interagency Subcommittee on Labor Monitoring and Enforcement list meeting topics but do not detail expected actions or outcomes; USTR provided no record, beyond a general description, of what transpired at the meetings or of any intra-agency correspondence following the meetings. Further, during our interviews with USTR officials, the officials discussed items that the interagency process had not produced, such as up-to-date assessments of risk; agreed-on priorities; and formal action plans for partners other than Colombia, Guatemala, and Jordan. For example, USTR officials indicated that a previous interagency effort to develop a comprehensive risk-based approach had been overtaken by events and not revised. DOL officials indicated that in general, there had been no discussion of cross-agency resource use. However, our work on best practices in collaboration has shown that agencies can enhance and sustain their collaborative efforts by engaging in the following eight practices: (1) define and articulate a common outcome; (2) establish mutually reinforcing or joint strategies; (3) identify and address needs by leveraging resources; (4) agree on roles and responsibilities; (5) establish compatible policies, procedures, and other means to operate across agency boundaries; (6) develop mechanisms to monitor, evaluate, and report on results; (7) reinforce agency accountability for collaborative efforts through agency plans and reports; and (8) reinforce individual accountability for collaborative efforts through performance management systems.

Comment 12: Our report does not state that USTR should explicitly accuse partner countries of trade violations in its annual report to Congress. Our report states that, reflecting in part USTR’s and DOL’s limited monitoring and enforcement of these provisions, USTR’s annual report generally does not detail concerns about the implementation of FTA labor provisions by partner countries that have not been the subject of labor submissions. Our report also states that appropriate information resulting from more extensive monitoring and enforcement could help inform Congress and other U.S. stakeholders about the extent to which trade partners are fulfilling their FTA labor commitments.

In addition to the contact named above, Kim Frankena (Assistant Director), Francisco Enriquez (Analyst-in-Charge), Juan P. Avila, Nicholas Jepson, Jill Lacey, Reid Lowe, Grace Lui, and Oziel Trevino made major contributions to this report.
The United States has signed 14 FTAs, liberalizing U.S. trade with 20 countries. These FTAs include provisions regarding fundamental labor rights in the partner countries. USTR and DOL, supported by State, are responsible for monitoring and assisting FTA partners’ implementation of these provisions. GAO was asked to assess the status of implementation of FTA labor provisions in partner countries. GAO examined (1) steps that selected partner countries have taken, and U.S. assistance they have received, to implement these provisions and other labor initiatives and the reported results of such steps; (2) submissions regarding possible violations of FTA labor provisions that DOL has accepted and any problems related to the submission process; and (3) the extent to which U.S. agencies monitor and enforce implementation of FTA labor provisions and report results to Congress. GAO selected CAFTA-DR and the FTAs with Colombia, Oman, and Peru as representative of the range of FTAs with labor provisions, among other reasons. GAO reviewed documentation related to each FTA and interviewed U.S., partner government, and other officials in five of the partner countries.

Partner countries of free trade agreements (FTA) that GAO selected—the Dominican Republic-Central America-United States Free Trade Agreement (CAFTA-DR) and the FTAs with Colombia, Oman, and Peru—have taken steps to implement labor provisions and other initiatives to strengthen labor rights. For example, U.S. and foreign officials said that El Salvador and Guatemala—both partners to CAFTA-DR—as well as Colombia, Oman, and Peru have acted to change labor laws, and Colombia and Guatemala have acted to address violence against union members. Since 2001, U.S. agencies have provided $275 million in labor-related technical assistance and capacity-building activities for FTA partners, including $222 million for the four FTAs GAO reviewed. However, U.S. agencies reported, and GAO found, persistent challenges to labor rights, such as limited enforcement capacity, the use of subcontracting to avoid direct employment, and, in Colombia and Guatemala, violence against union leaders.

Since 2008, the Department of Labor (DOL) has accepted five formal complaints—known as submissions—about possible violations of FTA labor provisions and has resolved one, regarding Peru (see fig.). However, for each submission, DOL has exceeded by an average of almost 9 months its 6-month time frame for investigating FTA-related labor submissions and issuing public reports, showing the time frame to be unrealistic. Also, union representatives and other stakeholders GAO interviewed in partner countries often did not understand the submission process, possibly limiting the number of submissions filed. Further, stakeholders expressed concerns that delays in resolving the submissions, resulting in part from DOL’s exceeding its review time frames, may have contributed to the persistence of conditions that affect workers and are allegedly inconsistent with the FTAs.

[Figure: Five Labor Submissions Accepted by DOL Regarding Free Trade Agreements]

In 2009, GAO found weaknesses in the Office of the U.S. Trade Representative’s (USTR) and DOL’s monitoring and enforcement of FTA labor provisions. In the same year, the agencies pledged to adopt a more proactive, interagency approach.
GAO’s current review found that although the agencies have taken several steps since 2009 to strengthen their monitoring and enforcement of FTA labor provisions, they lack a strategic approach to systematically assess whether partner countries’ conditions and practices are inconsistent with labor provisions in the FTAs. Despite some proactive steps, they generally rely on labor submissions to begin identifying, investigating, and initiating steps to address possible inconsistencies with FTA labor provisions. According to agency officials, resource limitations have prevented more proactive monitoring of all FTA labor provisions. As a result, USTR and DOL systematically monitor and enforce compliance with FTA labor provisions for only a few priority countries. USTR’s annual report to Congress about trade agreement programs provides limited details of the results of the agencies’ monitoring and enforcement of compliance with FTA labor provisions.

DOL should reevaluate its submission review time frame and better inform stakeholders about the submission process. USTR and DOL should establish a coordinated, strategic approach to monitoring and enforcing labor provisions. USTR’s annual report to Congress should include more information on USTR’s and DOL’s monitoring and enforcement efforts. The agencies generally agreed with the recommendations but disagreed with some findings, including the finding that they lack a systematic approach to monitor and enforce labor provisions in all FTAs. GAO stands by its findings.
The Health Insurance Portability and Accountability Act of 1996 (HIPAA) amended the Social Security Act (the act) to, among other things, (1) establish a Health Care Fraud and Abuse Control Program and (2) establish an expenditure account, designated as the Health Care Fraud and Abuse Control Account (Account) within the Federal Hospital Insurance Trust Fund (Trust Fund). The Account is administered by the Department of Health and Human Services’ (HHS) Office of Inspector General and the Department of Justice (DOJ). The amendment also makes appropriations for the Account from the general fund of the U.S. Treasury. The appropriations are in specified amounts for each fiscal year beginning with fiscal year 1997 for transfer to the Federal Bureau of Investigation (FBI) to carry out its health care fraud investigations. In 1997, HHS and FBI entered into an interagency agreement to facilitate the required annual transfer of funds from the Account to FBI solely for its health care fraud investigations. FBI receives the transferred funds and records the funds in its Salaries and Expenses (S&E) appropriation account at the beginning of each fiscal year. FBI then incurs obligations for health care fraud investigations and makes payments from the S&E account, which is also used to make payments for other FBI mission-related and support activities. The amounts that were transferred to FBI as required by the act for the years of our review were as follows: fiscal year 2000: $76 million; fiscal year 2001: $88 million; fiscal year 2002: $101 million; and fiscal year 2003: $114 million (and each subsequent year). The amendment requires that these funds be used solely to cover the costs (including equipment, salaries and benefits, and travel and training) of the administration and operation of the health care fraud and abuse control program, including the costs of prosecuting health care matters, investigations, financial and performance audits of health care programs, and inspections and other evaluations. These health care fraud investigations are managed nationally by FBI’s Health Care Fraud Unit (HCFU), which was created in 1992 within the Financial Crimes Section of the FBI’s Criminal Investigative Division. Health care fraud investigations include those for fraud against government programs and private insurance, as well as medical privacy law violations. HCFU is responsible for health care fraud investigations that are conducted by FBI’s field offices and, for management and reporting purposes, both HCFU and the related field investigations are considered a part of FBI’s White Collar Crime decision unit.

FBI used a limited approach to monitoring its use of HIPAA transfers, which might have been sufficient when it clearly used more agent full-time equivalents (FTEs) for health care fraud investigations than it had budgeted. But this approach was insufficient when some of the agent FTEs previously devoted to health care fraud investigations were shifted to counterterrorism activities, causing actual FTEs to fall below budgeted FTEs. At the time of our review, FBI’s Financial Management System (FMS) was unable to track overall costs related to health care fraud investigations. As a result, FBI had minimal assurance that all of the transferred HIPAA funds were properly spent.
DOJ is currently planning to implement the new DOJ-wide Unified Financial Management System (UFMS), but it has yet to develop the specific systems requirements to enable it to accurately capture all of the costs of its health care fraud investigations and therefore to help monitor compliance with HIPAA and other relevant laws and regulations. FBI’s budget for health care fraud investigations was equivalent to the amount of the HIPAA transfers and included only the direct program costs. These direct costs consisted of payroll and benefits for agents and certain other personnel involved in health care fraud investigations, plus related costs such as rent and supplies. FBI used an approach to monitoring the use of HIPAA transfers that considered agent FTEs only, without considering the other direct program costs. For the 4 years we reviewed, the agent FTEs represented about 42 percent of the budgeted amounts for work on health care fraud investigations, while the other direct program costs represented about 58 percent of the budgeted amounts. FBI obtained agent FTEs from reports generated by its time utilization system, which records the percentage of time that agents worked on each investigative classification and, if applicable, major cases. FBI officials told us that prior to September 11, 2001, reported agent FTEs charged to health care fraud investigations were historically far in excess of those budgeted, and they were satisfied that the resources expended on health care fraud investigations exceeded the HIPAA transfers. For example, in fiscal year 2000, FBI reported that health care fraud agent FTEs exceeded the number budgeted that year by about 19 percent. However, this limited approach to monitoring the use of the HIPAA transfers was insufficient when agent FTEs were shifted to counterterrorism activities after September 11, 2001, because it lacked specific cost information for both direct and indirect costs. Furthermore, FBI’s FMS was not capable of providing this specific cost information. Therefore, in years when reported agent FTEs were close to or below budgeted FTE amounts, FBI had no effective mechanism in place to monitor compliance with HIPAA. This was the case when reported agent FTEs approximated the budgeted amounts in fiscal year 2001 and fell below budgeted FTEs in fiscal years 2002 and 2003 by 31 percent and 26 percent, respectively.

Reliable information on the costs of federal programs and activities is crucial for the effective management of government operations and assists internal and external users in assessing the budget integrity, operating performance, stewardship, and systems and control of program activities. In this regard, the Chief Financial Officers Act of 1990 expressly calls on agencies to provide for the systematic measurement of performance and the development of cost information. In addition, “Statement of Federal Financial Accounting Standards Number 4” established cost accounting concepts and standards for all federal agencies, aimed at providing reliable and timely information on the full cost of federal programs, their activities, and outputs. Cost information for program activities is especially crucial in order to properly manage and account for funds that have been appropriated or, in this case, transferred for certain authorized purposes. FBI’s FMS has minimal capability to track health care fraud investigation costs and other specific program costs necessary to meet federal guidance.
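The arithmetic behind FBI’s FTE-only check is straightforward, which is part of why it proved insufficient. The following short Python sketch illustrates the check with hypothetical budgeted and reported FTE counts; only the approximate variances (about +19 percent in fiscal year 2000, near zero in fiscal year 2001, and -31 and -26 percent in fiscal years 2002 and 2003) reflect the figures discussed above.

    # Illustrative sketch of FBI's FTE-only monitoring check.
    # Budgeted and reported counts are hypothetical; only the approximate
    # variance percentages reflect the figures discussed in the text.
    fte_data = {
        2000: {"budgeted": 500, "reported": 595},  # about +19 percent
        2001: {"budgeted": 500, "reported": 500},  # approximately at budget
        2002: {"budgeted": 500, "reported": 345},  # about -31 percent
        2003: {"budgeted": 500, "reported": 370},  # about -26 percent
    }

    for year, d in sorted(fte_data.items()):
        variance = (d["reported"] - d["budgeted"]) / d["budgeted"]
        status = "OK" if variance >= 0 else "shortfall: full cost review needed"
        print(f"FY{year}: agent FTE variance {variance:+.0%} -> {status}")

As the sketch makes plain, such a check can flag a shortfall in agent FTEs, but even a favorable variance says nothing about the roughly 58 percent of budgeted program costs that are not agent payroll, which is why program-level cost tracking matters.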
FMS tracks costs by cost center, and only headquarters costs are separately identifiable by program. However, substantially all of the direct costs for health care fraud investigations are incurred at the individual field offices, each of which is considered a separate cost center. The specific program costs within each field location, therefore, are not individually tracked or separately identifiable. As a result, FBI cannot use FMS to track and report all health care fraud investigation costs. DOJ is currently in the process of implementing a new financial management system. The new UFMS is likely to have the capability to capture nonpersonnel costs on a program or subprogram basis, if design specifications are set to do so. However, according to FBI finance officials, specifications have not been set up for UFMS to capture total payroll costs at the program level. These costs amounted to about 55 percent of the budgeted health care fraud investigation costs. Under UFMS as currently planned, payroll costs are to be processed by the U.S. Department of Agriculture’s National Finance Center (NFC) starting in 2006 and recorded at the summary level. As a result, they will be combined with the payroll costs of other programs and will not be separately identifiable. NFC officials told us that they recommend that all customers receive summary and detailed-level data. NFC uses a 27-digit accounting code, of which 24 digits are available to agencies to establish an account structure for detailed payroll information such as program, subprogram, and job code levels (a hypothetical layout is sketched below). NFC officials also stated that they met informally with FBI personnel and explained how the accounting codes could be used to support FBI’s needs. An FBI official told us that because of security concerns, FBI would have to carefully consider whether to use such codes. The absence of accounting codes for programs and subprograms or other control mechanisms for monitoring payroll and other costs will continue to impede FBI’s ability to assess compliance with HIPAA and other relevant laws and regulations.

In the absence of system-generated program costs and upon our request for cost schedules, FBI began developing an estimate of its health care fraud investigation costs incurred for fiscal years 2000 through 2003 in an attempt to determine the propriety of its use of the HIPAA transfers. FBI engaged in extensive manual efforts and developed cost estimates that, in their final form, appropriately considered both direct program costs and the portion of indirect, FBI-wide support unit costs that related to the health care fraud program. However, we found that the estimates were either directly or indirectly based on data from FBI’s time utilization system, which had not been properly validated, and various other data that were not adequately supported. As a result, despite a good-faith effort by FBI to estimate these costs, neither we nor FBI could reliably determine whether the HIPAA transfers were spent solely on health care fraud investigations for the 4-year period. FBI officials told us at the start of our review that they were unable to provide us with a report of actual costs and had not previously estimated the costs associated with the transfers for fiscal years 2000 through 2003, primarily because of the financial systems’ weaknesses that we previously discussed. As an alternative, they proposed to provide us with an estimate of health care fraud investigation costs and subsequently developed an estimation methodology.
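To make the accounting-code discussion concrete, the sketch below shows one hypothetical way the 24 agency-defined digits of an NFC code could be segmented to support program-level payroll tracking. The segment names and widths are our own illustrative assumptions, not an NFC or FBI design.

    # Hypothetical segmentation of the 24 agency-defined digits of an
    # NFC accounting code; segment names and widths are illustrative only.
    SEGMENTS = [("program", 4), ("subprogram", 4), ("job_code", 6),
                ("cost_center", 6), ("reserved", 4)]

    def build_code(**values):
        """Assemble the 24 agency digits from named, zero-padded segments."""
        return "".join(str(values[name]).zfill(width) for name, width in SEGMENTS)

    def parse_code(code):
        """Split a 24-digit agency code back into its named segments."""
        parsed, pos = {}, 0
        for name, width in SEGMENTS:
            parsed[name] = code[pos:pos + width]
            pos += width
        return parsed

    # A payroll charge coded to a (hypothetical) health care fraud subprogram:
    code = build_code(program=1200, subprogram=45, job_code=300101,
                      cost_center=880214, reserved=0)
    print(code, parse_code(code)["subprogram"])

With digits reserved for program and subprogram, payroll charges could be rolled up by program. Absent such codes, program-level payroll could not be isolated, which is why FBI had to resort to the estimation exercise described next.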
Because cost estimates for this program had never before been attempted, FBI tried different approaches and revised the estimates several times during our review. The first three estimates for the 4-year period were revised to reflect slight changes in the methodology and to correct an error in the estimates but were quite similar to each other in method and in results. In essence, FBI estimated the direct costs for each budgeted line item of the health care fraud program. These were categorized into four groups: (1) headquarters payroll and benefits; (2) agent payroll and benefits; (3) other field personnel’s payroll and benefits; and (4) related nonpersonnel costs such as utilities, equipment, and supplies. The method for estimating each group is as follows: Headquarters payroll and benefits, as previously discussed, were predominantly tracked in FMS. The agent payroll and benefits costs, accounting for approximately 46 percent of FBI’s estimated program costs for the 4 years, were estimated directly on the basis of agent FTEs reported in FBI’s Time Utilization and Record Keeping (TURK) system and average FBI-wide salaries and benefits for General Schedule 10-13 field agents. The other field personnel’s payroll and benefits, which represented about 29 percent of FBI’s estimated program costs for the 4 years, were estimated from a combination of sources, including investigative support staff FTEs from the TURK system, that were summarized in two manual spreadsheets. The spreadsheets were prepared by two staff members, one of whom no longer works at FBI; the methodology for the departed staff member’s spreadsheet is uncertain, and the results could not be verified. FBI’s related nonpersonnel cost estimates were generally prorated from the budgeted amounts on the basis of the ratio of FTEs reported in the TURK system to budgeted FTEs (the sketch below illustrates the calculation). For example, for fiscal year 2003, the nonpersonnel costs line-item amounts were estimated at about 79 percent of the budgeted amount, since only 664 total FTEs were reportedly charged to health care fraud investigations, while a total of 844 FTEs were budgeted. However, the budgeted and estimated amounts for these costs were not subsequently compared with actual costs for any of the years presented in order to verify the reasonableness of the amounts.

We determined that the primary source data used either directly or indirectly to estimate the health care fraud costs, as reported from the TURK system, had not been properly tested to determine how much reliance could be placed on the information. Prior to fiscal year 2002, the work time percentages and related investigative classification information in the TURK system, which has been operational in its current form since 1991 and is used by FBI for a variety of budgetary and program management decision making, had never been properly validated. We found that for fiscal years 2002 and 2003, FBI conducted limited internal testing, including tests on whether the work hours recorded in the system were correctly charged to the appropriate investigative case. The tests were performed at all of the field office locations and produced error rates that varied from year to year and across field locations but were not conducted on statistically valid samples. Therefore, the results cannot be applied to the population beyond the specific items tested. Nonetheless, the identified errors raise questions about the reliability of the data in TURK and demonstrate the need for additional data validation work by FBI.
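FBI’s proration of nonpersonnel costs reduces to a single ratio. The Python sketch below reproduces the fiscal year 2003 calculation; the FTE counts (664 reported against 844 budgeted) are from our review, while the dollar line items are hypothetical, since FBI’s budget detail is not reproduced here.

    # FBI's proration method for nonpersonnel cost estimates (FY 2003):
    #   estimated cost = budgeted cost x (reported FTEs / budgeted FTEs)
    # FTE counts are from our review; dollar amounts are hypothetical.
    reported_ftes, budgeted_ftes = 664, 844
    ratio = reported_ftes / budgeted_ftes  # about 0.787, i.e., roughly 79 percent

    budgeted_nonpersonnel = {  # hypothetical line items
        "rent": 5_000_000,
        "utilities": 1_200_000,
        "supplies": 800_000,
    }

    for item, budgeted in budgeted_nonpersonnel.items():
        print(f"{item}: budgeted ${budgeted:,} -> estimated ${budgeted * ratio:,.0f}")

Note that any estimate produced this way inherits whatever error is in the TURK FTE counts and, as discussed above, was never compared with actual costs to verify its reasonableness.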
In addition, a key financial official indicated that at least one of the tests might not have been properly designed to validate the data. In addition to the lack of validation of the TURK data, certain other supporting documentation could not be verified or was not adequate. Examples include the following: The health care fraud equipment account, approximately 5 percent of budgeted amounts for the 4 years, funded purchases other than equipment, such as travel and training expenses, and was used much like a discretionary account. FBI officials told us that at year’s end, these nonequipment costs were adjusted by moving them out of the equipment account and into the appropriate line item on the basis of the amounts recorded in a detailed listing of purchases prepared by HCFU. We were unable to reconcile the amount of purchases recorded in the detailed equipment listings or the amount of interaccount adjustments to the FBI cost estimates, and no such reconciliation was provided by FBI. The amount of interaccount adjustments ranged from $424,000 to $7.5 million a year for the 4 years under review. FBI used year-end average salaries of support personnel that might not have accurately represented the mix of salaries of the staff supporting health care fraud investigations, whose duties ranged from administrative to professional (e.g., medical experts). FBI surveyed field office managers in an effort to capture other FTEs that were related to health care fraud investigations. Field managers estimated the portion of hours or FTEs spent investigating health care fraud cases that were recorded in TURK under other investigative classifications. For example, a case dealing with an Internet pharmacy that was investigated by a Cyber Crimes squad could be considered a health care fraud investigation and included in the estimate of health care fraud costs. These other agent FTEs are difficult to verify and, in some cases, were reported from memory.

On average, the first three FBI cost estimates showed that FBI spent $33 million more on health care fraud investigations than the amount of the HIPAA transfers for fiscal years 2000 and 2001 and about $29 million less than the HIPAA transfers for fiscal years 2002 and 2003. When we were provided with these cost estimates, FBI officials stated that this shift in resources away from health care fraud investigations in the latter 2 years was a result of the increase in counterterrorism investigative activity after the September 11 attacks. FBI management prepared a fourth and final cost estimate that included an allocation of the additional costs of other FBI units, such as forensic laboratory services and mandatory training, that support various FBI programs, including the health care fraud program. While it is generally appropriate to include such indirect costs when determining total program costs, these additional costs were not previously considered when budgeting the funds transferred by the Congress to FBI for its health care fraud investigations. Furthermore, these additional items were not included in DOJ’s October 2003 response to the Senate Finance Committee regarding FBI’s use of the HIPAA transfers. FBI estimated the portion of costs for each of the six support units—training, forensics, information management, technical field support, criminal justice services, and management and administration—that related to health care fraud investigations and added them to the estimates of direct costs previously provided to us in the third version.
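In outline, the fourth estimate’s indirect allocation was a two-step proration, which the next paragraph examines in detail. The sketch below illustrates the arithmetic with entirely hypothetical figures; only the structure (support unit costs allocated first to a DOJ Strategic Goal and then to the health care fraud program by TURK-based FTE shares) reflects the method FBI described.

    # Simplified sketch of FBI's two-step indirect cost allocation.
    # All figures are hypothetical; only the two-step structure reflects
    # the method described in the text.
    support_unit_costs = {  # hypothetical FBI-wide support unit totals
        "training": 90_000_000,
        "forensics": 120_000_000,
        "information_management": 150_000_000,
    }
    goal_share = 0.30  # hypothetical share allocated to the DOJ Strategic Goal
    fte_share = 0.08   # hypothetical TURK-based share for health care fraud

    indirect = sum(cost * goal_share * fte_share
                   for cost in support_unit_costs.values())
    print(f"indirect costs allocated to health care fraud: ${indirect:,.0f}")

Because both allocation percentages rest on unvalidated inputs (the Statement of Net Cost percentages in the first step and TURK FTE data in the second), any result of this calculation inherits their uncertainty.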
While FBI put forth a good-faith effort to devise a way to allocate the indirect support unit costs to health care fraud investigations, its methodology relied, in part, on layers of unvalidated data. For example, FBI allocated FBI-wide support unit costs as reported in its audited financial statements first to the DOJ Strategic Goal that included health care fraud investigations. The percentages used in this calculation were the same used to allocate costs for FBI’s Statement of Net Cost; however, FBI’s auditors said they did not validate the methodology or documentation supporting the allocation percentages. After FBI allocated the specific support unit costs to the DOJ Strategic Goals, FBI allocated those costs to health care fraud investigations using FTE data based primarily on TURK, which, as previously discussed, has not been validated. On average, the additional indirect costs represented approximately $34 million per year, or 27 percent of total health care fraud costs. With the addition of these indirect costs, FBI ultimately estimated that it spent more on health care fraud investigations than was funded by the HIPAA transfers for all 4 years. However, with the magnitude of unverified and inadequately supported data, neither we nor FBI could reliably determine whether the HIPAA transfers were spent solely on health care fraud investigations for the 4-year period.

FBI’s monitoring approach for determining the proper use of HIPAA funding was limited and did not provide the level of assurance needed when agent FTEs devoted to health care fraud investigations were close to or below budgeted amounts. Absent a financial management system that could capture the costs of its health care fraud investigations, FBI had to resort to extensive manual efforts to estimate the costs but did not have the data needed to do so reliably. Until FBI improves its data reliability and either develops a financial management system capable of tracking and reporting health care fraud investigation cost information or establishes some other effective monitoring approach, it will continue to lack sufficient accountability over the use of the HIPAA transfers. Inadequate accountability hinders efforts to budget, manage, and account for program funds appropriately and will leave FBI at an increased risk of violating HIPAA and other laws.

We are making four recommendations—two to enhance FBI’s accountability over the HIPAA transfers and the costs related to health care fraud investigations in the short term and two to augment the new Unified Financial Management System’s cost-tracking capabilities in the long term. We recommend that the Director of the FBI take the following actions: Develop formal, interim policies and procedures for reporting health care fraud investigation costs that specify (1) the costs to be estimated and/or allocated, (2) the supporting documentation to be maintained, and (3) the method used to validate those data. Periodically conduct statistically valid testing of the data in the Time Utilization and Record Keeping system, in particular the work time percentages and related investigative classification information, to ensure the TURK system’s reliability. In addition, require field office managers to follow up on any issues identified in the testing.
Either specify that the UFMS have the capability to allocate payroll costs provided by the NFC payroll system to specific programs or develop cost accounting codes at the program and subprogram levels to enable NFC to provide the necessary detailed payroll reports. Ensure that the UFMS has the capability and design specifications to track nonpersonnel costs related to health care fraud investigations.

In a joint letter with written comments on a draft of this report (reprinted in appendix II), DOJ and FBI agreed with the four recommendations in this report and said they have begun to address the two short-term recommendations. Specifically, FBI expects to complete reviews of its procedures used to track health care fraud investigation costs and the collection and validation procedures for data entered into the TURK system by May 31, 2005. Concerning the long-term solution through financial management system enhancements, FBI acknowledged the need to establish control mechanisms to monitor both personnel and nonpersonnel costs related to health care fraud investigations to ensure the transparent allocation of FBI resources while maintaining appropriate levels of security. FBI notes that its dedication of HIPAA resources to health care fraud investigations has contributed to a number of significant, high-profile case accomplishments. FBI and DOJ officials provided oral comments on technical matters, which we have incorporated as appropriate.

As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from its issue date. At that time, we will send copies of this report to the Ranking Minority Member, Senate Committee on Finance; the Attorney General of the United States; the Director, FBI; the Director, Office of Management and Budget; and other interested parties. We will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. Should you or your staff have any questions on the matters discussed in this report, please contact me at (202) 512-9508 or by e-mail at calboml@gao.gov, or contact Steven R. Haughton, Assistant Director, at (202) 512-5999 or haughtons@gao.gov. Major contributors to this report are included in appendix III.

We reviewed the internal controls related to the use of the Health Insurance Portability and Accountability Act (HIPAA) transfers and the cost estimates of health care fraud investigations for fiscal years 2000 through 2003. We requested available documentation of the policies, procedures, and guidelines relating to the HIPAA transfers and the health care fraud program. We conducted interviews with Federal Bureau of Investigation (FBI) officials to obtain an understanding of the internal controls, including fund controls, in place over transferred funds. We reviewed the sufficiency of those internal controls in light of GAO’s Standards for Internal Control in the Federal Government. Because of the inability of FBI’s existing Financial Management System to produce program-level cost information as described in this report, FBI developed an estimate of the costs of health care fraud investigations. FBI provided us with a schedule of cost estimates for its health care fraud investigations for each of the 4 fiscal years, a description of the cost-estimation methodology, and various documentation that supported some of the headquarters and field office costs.
During the course of our work, FBI modified its original schedule of cost estimates twice because of slight changes to the cost-estimation methodology. For all three schedules of costs, we evaluated the overall method of cost estimation to determine the method’s reasonableness and ability to assure that applicable laws and regulations were followed. Additionally, we (1) identified the sources of information used in the methodology; (2) verified, where possible, the underlying data flow and formulas; and (3) compared the cost estimates with the amount of funds transferred to FBI for each of the 4 years. FBI provided us with a fourth and final schedule of cost estimates on January 5, 2005, which included allocations of FBI-wide administrative and support costs not originally considered part of health care fraud costs. We requested supporting documentation for these additional allocations and reviewed the limited documentation that was available. FBI’s cost-estimation methodology was based significantly on information derived from the Time Utilization and Record Keeping system. We were unable to rely on this system’s data for the purpose of our review because the data for fiscal years 2000 and 2001 had not been validated and a limited internal review reported varying error rates for the data for fiscal years 2002 and 2003. The internal review, however, was not statistically valid. We did not attempt to independently validate the system. We performed our work from February 2004 through January 2005 in accordance with generally accepted government auditing standards. We requested written comments on this report from the Director of the FBI or his designee. A joint letter with comments from DOJ and FBI was received and is reprinted in appendix II.

In addition to those named above, Sharon O. Byrd, Richard T. Cambosos, Tyshawn A. Davis, Lori B. Ryza, and Ruth S. Walk made key contributions to this report.
The Health Insurance Portability and Accountability Act of 1996 (HIPAA) provided, among other things, funding by transfer to the Federal Bureau of Investigation (FBI) to carry out specific purposes of the Health Care Fraud and Abuse Control Program. Congress expressed concern about a shift in FBI resources from health care fraud investigations to counterterrorism activities after September 11, 2001. Congress asked GAO to review FBI’s accountability for the funds transferred under HIPAA for fiscal years 2000 through 2003. GAO determined (1) whether FBI had an adequate approach for ensuring the proper use of the HIPAA transfers and (2) the extent to which FBI had expended these transferred funds on health care fraud investigations in fiscal years 2000 through 2003.

FBI used a limited approach to monitoring its use of HIPAA transfers, which might have been sufficient during times when it clearly used more agent full-time equivalents (FTEs) for health care fraud investigations than budgeted but was insufficient when some of the agent FTEs previously devoted to health care fraud investigations were shifted to counterterrorism activities. FBI’s budgeted FTEs (agent and other personnel) and related costs (such as rent and utilities) were equivalent to the amount of the HIPAA transfers. However, FBI’s approach to monitoring the use of HIPAA transfers considered only agent FTEs, which made up about 42 percent of the budgeted health care fraud costs, but did not consider other personnel FTEs or related costs. According to FBI officials, they did not monitor these other budgeted amounts to determine compliance with HIPAA because the actual agent FTEs were historically far in excess of those budgeted. However, once FBI began to shift agent resources away from health care fraud investigations, agent FTEs charged to health care fraud investigations fell below the budgeted amounts, and FBI could no longer rely on this limited approach to ensure that the transferred HIPAA funds were properly used. Furthermore, FBI did not have a system in place to capture its overall health care fraud investigation costs and, therefore, was not in a position to determine whether all transferred HIPAA funds were properly expended. In response to GAO’s review, FBI engaged in extensive manual efforts to develop cost estimates related to health care fraud investigations for fiscal years 2000 through 2003. The final estimate provided to GAO showed that FBI spent more on health care fraud investigations than was funded by transfers for each of the 4 years. However, GAO found that, overall, FBI’s estimates of its health care fraud investigation costs were based on data that had not been or could not be fully validated. Therefore, even though FBI made a good-faith effort to estimate these costs, because of data limitations, neither GAO nor FBI could reliably determine whether all of the HIPAA transfers were spent solely for health care fraud investigations and related activities for the 4-year period. DOJ is currently planning the implementation of a new DOJ-wide Unified Financial Management System (UFMS), but it has yet to develop the specific systems requirements that would enable FBI to accurately capture all of its health care fraud-related costs and therefore to help monitor compliance with HIPAA and other relevant laws and regulations.
History is a good teacher. To solve the problems of today, it is instructive to look to the past. The problems with the department’s financial management operations date back decades, and previous attempts at reform have largely proven unsuccessful. These problems adversely affect DOD’s ability to control costs, ensure basic accountability, anticipate future costs and claims on the budget (such as for health care, weapon systems, and environmental liabilities), measure performance, maintain funds control, prevent fraud, and address pressing management issues. In this regard, I would like to briefly highlight three of our recent products that exemplify the adverse impact of DOD’s reliance on fundamentally flawed financial management systems and processes and a weak overall internal control environment.

In testimony before your subcommittee last week, we highlighted continuing problems with internal controls over approximately $64 million in fiscal year 2001 purchase card transactions involving two Navy activities. Consistent with our testimony last July on fiscal year 2000 purchase card transactions at these locations, our follow-up review demonstrated that continuing control problems left these Navy activities vulnerable to fraudulent, improper, and abusive purchases and theft and misuse of government property. We are currently auditing purchase card usage across the department. In testimony before your subcommittee in July 2001, we reported that DOD did not have adequate systems, controls, and managerial attention to ensure that $2.7 billion of adjustments to closed appropriations were legal and otherwise proper. Our review of $2.2 billion of these adjustments found that about $615 million of them should not have been made, including about $146 million that were illegal. In June 2001, we reported that DOD’s current financial systems could not adequately track and report on whether the $1.1 billion in earmarked funds that the Congress provided to DOD for spare parts and associated logistical support were actually used for the intended purpose. The vast majority of the funds—92 percent—were transferred to the military services’ operation and maintenance accounts. We found that once these funds were transferred, DOD lost its ability to assure the Congress that the funds it received for spare parts purchases were used for, and only for, that purpose.

Problems with the department’s financial management operations go far beyond its accounting and finance systems and processes. The department continues to rely on a far-flung, complex network of finance, logistics, personnel, acquisition, and other management information systems—80 percent of which are not under the control of the DOD Comptroller—to gather the financial data needed to support day-to-day management decisionmaking. This network was not designed to be, but rather has evolved into, the overly complex and error-prone operation that exists today, including (1) little standardization across DOD components, (2) multiple systems performing the same tasks, (3) the same data stored in multiple systems, (4) manual data entry into multiple systems, and (5) a large number of data translations and interfaces that combine to exacerbate problems with data integrity. DOD has determined, for example, that efforts to reconcile a single contract involving 162 payments resulted in an estimated 15,000 adjustments.
Many of the department’s business processes in operation today are mired in old, inefficient processes and legacy systems, some of which go back to the 1950s and 1960s. For example, the department still relies on the Mechanization of Contract Administration Services (MOCAS) system, implemented in 1968, to process a substantial portion of the contract payment transactions for all DOD organizations. In fiscal year 2001, MOCAS processed an estimated $78 billion in contract payments. Past efforts to replace MOCAS have failed. Most recently, in 1994, DOD began acquiring the Standard Procurement System (SPS) to replace the contract administration functions currently performed by MOCAS. However, our July 2001 and February 2002 reporting on DOD’s $3.7 billion investment in SPS showed that this substantial investment was not economically justified and raised questions about whether further investment in SPS was warranted. For the foreseeable future, DOD will continue to be saddled with MOCAS.

Moving to the 1970s, we, the Defense Inspector General, and the military service audit organizations issued numerous reports detailing serious problems with the department’s financial management operations. For example, between 1975 and 1981, we issued more than 75 reports documenting serious problems with DOD’s existing cost, property, fund control, and payroll accounting systems. In the 1980s, we found that despite the billions of dollars invested in individual systems, these efforts, too, fell far short of the mark, with extensive schedule delays and cost overruns. For example, in 1989, our report on eight major DOD system development efforts—including two major accounting systems—under way at that time showed that system development cost estimates doubled, two of the eight efforts were abandoned, and the remaining six efforts experienced delays of 3 to 7 years.

Beginning in the 1990s, following passage of the Chief Financial Officers (CFO) Act of 1990, there was a recognition in DOD that broad-based financial management reform was needed. Over the past 12 years, the department has initiated several departmentwide reform initiatives intended to fundamentally reform its financial operations as well as other key business support processes, including the Corporate Information Management initiative, the Defense Business Operations Fund, and the Defense Reform Initiative. These efforts, which I will highlight today, have proven to be unsuccessful despite good intentions and significant effort. The conditions that led to these previous attempts at reform remain largely unchanged today.

Corporate Information Management. The Corporate Information Management (CIM) initiative, begun in 1989, was expected to save billions of dollars by streamlining operations and implementing standard information systems. CIM was expected to reform all of DOD’s functional areas—including finance, procurement, material management, and human resources—through consolidating, standardizing, and integrating information systems. DOD also expected CIM to replace approximately 2,000 duplicative systems. Over the years, we have made numerous recommendations to improve CIM’s management, but these recommendations were largely not addressed. Instead, DOD spent billions of dollars with little sound analytical justification. We reported in 1997 that, 8 years after beginning CIM and spending about $20 billion on the initiative, expected savings had yet to materialize. The initiative was eventually abandoned.
Defense Business Operations Fund. In October 1991, DOD established a new entity, the Defense Business Operations Fund, by consolidating nine existing industrial and stock funds and five other activities operated throughout DOD. Through this consolidation, the fund was intended to bring greater visibility and management to the overall cost of carrying out certain critical DOD business operations. However, from its inception, the fund was plagued by management problems. In 1996, DOD announced the fund’s elimination. In its place, DOD established four working capital funds. These new working capital funds inherited their predecessor’s operational and financial reporting problems.

Defense Reform Initiative (DRI). In announcing the DRI program in November 1997, the then Secretary of Defense stated that his goal was “to ignite a revolution in business affairs.” DRI represented a set of proposed actions aimed at improving the effectiveness and efficiency of DOD’s business operations, particularly in areas that have been long-standing problems—including financial management. In July 2000, we reported that while DRI got off to a good start and made progress in implementing many of the component initiatives, it did not meet expected time frames and goals, and the extent to which savings from these initiatives would be realized was yet to be determined. GAO is currently examining the extent to which DRI efforts begun under the previous administration are continuing.

The past has clearly taught us that addressing DOD’s serious financial management problems will not be easy. Early in his tenure, Secretary Rumsfeld commissioned a new study of the department’s financial management operations. The report on the results of the study, Transforming Department of Defense Financial Management: A Strategy for Change, was issued on April 13, 2001. The report recognized that the department will have to undergo “a radical financial management transformation” and that it would take more than a decade to achieve. The report concluded that many studies and interviews with current and former leaders in DOD point to the same problems and frustrations, and that repeated audit reports verify systemic problems illustrating the need for radical transformation in order to achieve success. Secretary Rumsfeld further confirmed the need for a fundamental transformation of DOD in his “top-down” Quadrennial Defense Review. Specifically, his September 30, 2001, Quadrennial Defense Review Report concluded that the department must transform its outdated support structure, including decades-old financial systems that are not well interconnected. The report summed up the challenge well in stating: “While America’s businesses have streamlined and adopted new business models to react to fast-moving changes in markets and technologies, the Defense Department has lagged behind without an overarching strategy to improve its business practices.” As part of our constructive engagement approach with DOD, the Comptroller General met with Secretary Rumsfeld last summer to provide our perspectives on the underlying causes of the problems that have impeded past reform efforts at the department and to discuss options for addressing these challenges.
There are four underlying causes: a lack of sustained top-level leadership and management accountability; deeply embedded cultural resistance to change, including military service parochialism and stovepiped operations; a lack of results-oriented goals and performance measures and monitoring; and inadequate incentives for seeking change.

Historically, DOD has not routinely assigned accountability for performance to specific organizations or individuals that have sufficient authority to accomplish desired goals. For example, under the CFO Act, it is the responsibility of agency CFOs to establish the mission and vision for the agency’s future financial management. However, at DOD, the Comptroller—who is by statute the department’s CFO—has direct responsibility for only an estimated 20 percent of the data relied on to carry out the department’s financial management operations. The department has learned through its efforts to meet the Year 2000 computing challenge that to be successful, major improvement initiatives must have the direct, active support and involvement of the Secretary and Deputy Secretary of Defense. In the Year 2000 case, the then Deputy Secretary of Defense was personally and substantially involved and played a major role in the department’s success. Such top-level support and attention helps ensure that daily activities throughout the department remain focused on achieving shared, agencywide outcomes. A central finding from our report on our survey of best practices of world-class financial management organizations—Boeing; Chase Manhattan Bank; General Electric; Pfizer; Hewlett-Packard; Owens Corning; and the states of Massachusetts, Texas, and Virginia—was that clear, strong executive leadership was essential to (1) making financial management an entitywide priority, (2) redefining the role of finance, (3) providing meaningful information to decisionmakers, and (4) building a team of people that deliver results. DOD’s past experience suggests that top management has not had a proactive, consistent, and continuing role in building capacity, integrating daily operations for achieving performance goals, and creating incentives. Sustaining top management commitment to performance goals is a particular challenge for DOD. In the past, the average 1.7-year tenure of the department’s top political appointees has served to hinder long-term planning and follow-through.

Cultural resistance to change and military service parochialism have also played a significant role in impeding previous attempts to implement broad-based management reforms at DOD. The department has acknowledged that it confronts decades-old problems deeply grounded in the bureaucratic history and operating practices of a complex, multifaceted organization, and that many of these practices were developed piecemeal and evolved to accommodate different organizations, each with its own policies and procedures. For example, as discussed in our July 2000 report, the department encountered resistance to developing departmentwide solutions under the then Secretary’s broad-based DRI. In 1997, the department established a Defense Management Council—including high-level representatives from each of the military services and other senior executives in the Office of the Secretary of Defense—which was intended to serve as the “board of directors” to help break down organizational stovepipes and overcome cultural resistance to changes called for under DRI.
However, we found that the council’s effectiveness was impaired because members were not able to put their individual military services’ or DOD agencies’ interests aside to focus on departmentwide approaches to long-standing problems. We have also seen an inability to put aside parochial views. Cultural resistance to change has impeded reforms not only in financial management but also in other business areas, such as weapon system acquisition and inventory management. For example, as we reported last year, while the individual military services conduct considerable analyses justifying major acquisitions, these analyses can be narrowly focused and do not consider joint acquisitions with the other services. In the inventory management area, DOD’s culture has supported buying and storing multiple layers of inventory rather than managing with just the amount of stock needed.

Further, DOD’s past reform efforts have been handicapped by the lack of clear, linked goals and performance measures. As a result, DOD managers lack straightforward road maps showing how their work contributes to attaining the department’s strategic goals, and they risk operating autonomously rather than collectively. In some cases, DOD had not yet developed appropriate strategic goals, and in other cases, its strategic goals and objectives were not linked to those of the military services and defense agencies. As part of our assessment of DOD’s Fiscal Year 2000 Financial Management Improvement Plan, we reported that, for the most part, the plan represented the military services’ and Defense components’ stovepiped approaches to reforming financial management and did not clearly articulate how these various efforts would collectively result in an integrated DOD-wide approach to financial management improvement. In addition, we reported that the department’s plan did not include performance measures that could be used to assess DOD’s progress in resolving its financial management problems. DOD officials have informed us that they are now working to revise the department’s approach to this plan so that future years’ updates will reflect a more strategic, departmentwide vision and provide a more effective tool for financial management reform.

As it moves to modernize its systems, the department faces a formidable challenge in responding to technological advances that are changing traditional approaches to business management. For fiscal year 2001, DOD reported total information technology investments of almost $23 billion, supporting a wide range of military operations as well as DOD business functions. As we have reported, while DOD plans to invest billions of dollars in modernizing its financial management and other business support systems, it does not yet have an overall blueprint—or enterprise architecture—in place to guide and direct these investments. As we recently testified, our review of practices at leading organizations showed they were able to make sure their business systems addressed corporate—rather than individual business unit—objectives by using enterprise architectures to guide and constrain investments. Consistent with our recommendation, DOD is now working to develop a financial management enterprise architecture, which is a very positive development. The final underlying cause of the department’s long-standing inability to carry out needed fundamental reform has been the lack of incentives for making more than incremental change to existing “business-as-usual” processes, systems, and structures.
Traditionally, DOD has focused on justifying its need for more funding rather than on the outcomes its programs have produced. DOD generally measures its performance by the amount of money spent, people employed, or number of tasks completed. Incentives for its decision makers to implement changed behavior have been minimal or nonexistent. Secretary Rumsfeld perhaps said it best in announcing his planned transformation at DOD: “There will be real consequences from, and real resistance to, fundamental change.” This lack of incentive has perhaps been most evident in the department’s acquisition area. In DOD’s culture, the success of a manager’s career has depended more on moving programs and operations through the DOD process than on achieving better program outcomes. The fact that a given program may have cost more than estimated, taken longer to complete, and not generated results or performed as promised was secondary to fielding a new program. To effect real change, actions are needed to (1) break down parochialism and reward behaviors that meet DOD-wide and congressional goals; (2) develop incentives that motivate decision makers to initiate and implement efforts that are consistent with better program outcomes, including saying “no” or pulling the plug on a system or program that is failing; and (3) facilitate a congressional focus on results-oriented management, particularly with respect to resource allocation decisions.

As we testified before your subcommittee last May, our experience has shown there are several key elements that, collectively, would enable the department to effectively address the underlying causes of its inability to resolve its long-standing financial management problems. These elements, which are key to any successful approach to financial management reform, include addressing the department’s financial management challenges as part of a comprehensive, integrated, DOD-wide business process reform; providing for sustained leadership by the Secretary of Defense and resource control to implement needed financial management reforms; establishing clear lines of responsibility, authority, and accountability for such reform tied to the Secretary; incorporating results-oriented performance measures and monitoring tied to financial management reforms; providing appropriate incentives or consequences for action or inaction; establishing an enterprisewide system architecture to guide and direct financial management modernization investments; and ensuring effective oversight and monitoring.

Actions on many of the key areas central to successfully achieving desired financial management and related business process transformation goals—particularly those that rely on longer-term systems improvements—will take a number of years to fully implement. Secretary Rumsfeld has estimated that his envisioned transformation may take 8 or more years to complete. Consequently, both long-term actions focused on the Secretary’s envisioned business transformation and short-term actions focused on improvements within existing systems and processes will be critical going forward. Short-term actions in particular will be critical if the department is to achieve the greatest possible accountability over existing resources and more reliable data for day-to-day decisionmaking while longer-term systems and business process reengineering efforts are under way.
Beginning with the Secretary’s recognition of a need for a fundamental transformation of the department’s business processes, and building on some of the work begun under past administrations, DOD has taken a number of positive steps in many of these key areas. At the same time, the challenges remaining in each of these key areas are daunting. As we have reported in the past, establishing the right goal is essential for success. Central to effectively addressing DOD’s financial management problems will be the recognition that they cannot be addressed in an isolated, stovepiped, or piecemeal fashion separate from the other high-risk areas facing the department. Successfully reengineering the department’s processes supporting its financial management and other business support operations will be critical if DOD is to effectively address the deep-rooted organizational emphasis on maintaining business-as-usual across the department.

Financial management is a crosscutting issue that affects virtually all of DOD’s business areas. For example, improving its financial management operations so that they can produce timely, reliable, and useful cost information will be essential if the department is to effectively measure its progress toward achieving many key outcomes and goals across virtually the entire spectrum of DOD’s business operations. At the same time, the department’s financial management problems—and, most importantly, the keys to their resolution—are deeply rooted in and dependent upon developing solutions to a wide variety of management problems across DOD’s various organizations and business areas. For example, we have reported that many of DOD’s financial management shortcomings were attributable in part to human capital issues. The department does not yet have a strategy in place for improving its financial management human capital. This is especially critical in connection with DOD’s civilian workforce, since DOD has generally done a much better job of human capital planning for its military personnel. In addition, DOD’s civilian personnel face a variety of size, shape, skills, and succession-planning challenges that need to be addressed. As I mentioned earlier, and it bears repetition, the department has reported that an estimated 80 percent of the data needed for sound financial management comes from its other business operations, such as its acquisition and logistics communities.

DOD’s vast array of costly, nonintegrated, duplicative, and inefficient financial management systems is reflective of its lack of an enterprisewide, integrated approach to addressing management challenges. DOD has acknowledged that one of the reasons for the lack of clarity in its reporting under the Government Performance and Results Act has been that most of the program outcomes the department is striving to achieve are interrelated, while its management systems are not integrated.

As I discussed previously, the Secretary of Defense has made the fundamental transformation of business practices throughout the department a top priority. In this context, the Secretary established a number of top-level committees, councils, and boards, including the Senior Executive Committee, Business Initiative Council, and Defense Business Practices Implementation Board. The Senior Executive Committee was established to help guide efforts across the department to improve its business practices.
This committee—chaired by the Secretary of Defense, with membership including the Deputy Secretary, the military service secretaries, and the Under Secretary of Defense for Acquisition, Technology and Logistics—was established to function as the board of directors for the department. The Business Initiative Council—comprising the military service secretaries and headed by the Under Secretary of Defense for Acquisition, Technology and Logistics—was established to encourage the military services to explore new money-saving business practices to help offset funding requirements for transformation and other initiatives. The Secretary also established the Defense Business Practices Implementation Board, composed of business leaders from the private sector. The board is intended to tap outside expertise to advise the department on its efforts to improve business practices.

The department’s successful Year 2000 effort illustrated, and our survey of leading financial management organizations captured, the importance of strong leadership from top management. As we have stated many times before, strong, sustained executive leadership is critical to changing a deeply rooted corporate culture—such as the existing “business as usual” culture at DOD—and to successfully implementing financial management reform. As I mentioned earlier, the personal, active involvement of the Deputy Secretary of Defense played a key role in building entitywide support and focus for the department’s Year 2000 initiatives. Given the long-standing and deeply entrenched nature of the department’s financial management problems—combined with the numerous competing DOD organizations, each operating with varying, often parochial views and incentives—such visible, sustained top-level leadership will be critical. In discussing their April 2001 report to the Secretary of Defense on transforming financial management, the authors stated that “unlike previous failed attempts to improve DOD’s financial practices, there is a new push by DOD leadership to make this issue a priority.” With respect to the key area of investment control, the Secretary took action to set aside $100 million for financial modernization. Strong, sustained executive leadership—over a number of years and administrations—will be key to changing a deeply rooted culture.

In addition, given that significant investments in information systems and related processes have historically occurred in a largely decentralized manner throughout the department, additional actions will likely be required to implement a centralized IT investment control strategy. For example, in our May 2001 report, we recommended that DOD take action to establish centralized control over transformation investments to ensure that funding is provided for only those proposed investments in systems and business processes that are consistent with the department’s overall business process transformation strategy. Last summer, when the Comptroller General met with Secretary Rumsfeld, he stressed the importance of establishing clear lines of responsibility, decision-making authority, and resource control for actions across the department tied to the Secretary as a key to reform. As we previously reported, such an accountability structure should emanate from the highest levels and include the secretary of each of the military services as well as heads of the department’s various major business areas.
The Secretary of Defense has taken action to vest responsibility and accountability for financial management modernization with the DOD Comptroller. In October 2001, the DOD Comptroller established the Financial Management Modernization Executive and Steering Committees as the governing bodies that oversee the activities related to this modernization effort and also established a supporting working group to provide day-to-day guidance and direction in these efforts. DOD reports that the executive and steering committees met for the first time in January 2002. It is clear to us that the Comptroller has the full support of the Secretary and that the Secretary is committed to making meaningful change. To make this work, it is important that the Comptroller have sufficient authority to bring about the full, effective participation of the military services and business process owners across the department. The Comptroller has direct control of 20 percent of the data needed for sound financial management and has historically had limited ability to control information technology investments across the department. Addressing issues such as centralization of authority for information systems investments and continuity of leadership is critical to successful business process transformation.

In addition to DOD, a number of other federal departments and agencies are facing an array of interrelated business system management challenges whose resolution is likely to require a number of years and could span administrations. One option that may have merit would be the establishment of chief operating officers, who could be appointed for a set term of 5 to 7 years with the potential for reappointment. These individuals should have a proven track record as business process change agents for large, diverse organizations and would spearhead business process transformation across the department or agency.

As discussed in our January 2001 report on DOD’s major performance and accountability challenges, establishing a results orientation is another key element of any approach to reform. Such an orientation should draw upon results that could be achieved through commercial best practices, including outsourcing and shared servicing concepts. Personnel throughout the department must share the common goal of establishing financial management operations that not only produce financial statements that can withstand the test of an audit but, more importantly, routinely generate useful, reliable, and timely financial information for day-to-day management purposes. In addition, we have previously testified that DOD’s financial management improvement efforts should be measured against an overall goal of effectively supporting DOD’s basic business processes, including appropriately considering related business process system interrelationships, rather than determining system-by-system compliance. Such a results-oriented focus is also consistent with an important lesson learned from the department’s Year 2000 experience. DOD’s initial Year 2000 focus was geared toward ensuring compliance on a system-by-system basis and did not appropriately consider the interrelationships of systems and business areas across the department. It was not until the department, under the direction of the then-Deputy Secretary, shifted to a core mission and function review approach that it was able to achieve the desired result of greatly reducing its Year 2000 risk.
Since the Secretary has established an overall business process transformation goal that will require a number of years to achieve, it is especially critical going forward for managers throughout the department to focus on specific, measurable metrics that, over time, will collectively translate into achieving this overall goal. It is important for the department to refocus its annual accountability reporting on this overall goal of fundamentally transforming the department’s financial management systems and related business processes, including appropriate interim annual measures for tracking progress toward this goal. In the short term, it is important to focus on actions that can be taken using existing systems and processes. It is critical to establish interim measures to both track performance against the department’s overall transformation goals and facilitate near-term successes using existing systems and processes.

The department has established an initial set of metrics intended to evaluate financial performance, and it reports that it has seen improvements. For example, with respect to closed appropriation accounts, DOD reported that during the first 4 months of fiscal year 2002 the dollar value of adjustments to closed appropriation accounts fell about 51 percent from the same 4-month period in fiscal year 2001. Other existing metrics concern cash and funds management, contract and vendor payments, and disbursement accounting. DOD also reported that it is working to develop these metrics into higher-level measures more appropriate for senior management. We support the department’s efforts to expand the use of appropriate metrics to guide its financial management reform efforts.

Another key to breaking down the parochial interests and stovepiped approaches that have plagued previous reform efforts is establishing mechanisms to reward organizations and individuals for behaviors that comply with DOD-wide and congressional goals. Such mechanisms should be geared to providing appropriate incentives and penalties that motivate decision makers to initiate and implement efforts resulting in fundamentally reformed financial management and other business support operations. Incentives driving traditional ways of doing business must be changed, and cultural resistance to new approaches must be overcome. Simply put, DOD must convince people throughout the department that they must change from business-as-usual systems and practices or they are likely to face serious consequences, organizationally and personally.

Establishing and implementing an enterprisewide financial management architecture is essential for the department to effectively manage its large, complex system modernization effort now under way. The Clinger-Cohen Act requires agencies to develop, implement, and maintain an integrated system architecture. As we previously reported, such an architecture can help ensure that the department invests only in integrated, enterprisewide business system solutions and, conversely, will help move resources away from non-value-added legacy business systems and nonintegrated business system development efforts.
In addition, without an architecture, DOD runs the serious risk that its system efforts will perpetuate the existing system environment, which suffers from systems duplication, limited interoperability, and unnecessarily costly operations and maintenance. In our May 2001 report, we pointed out that DOD lacks a financial management enterprise architecture to guide and constrain the billions of dollars it plans to spend to modernize its financial management operations and systems. DOD has reported that it is in the process of contracting for the development of a DOD-wide financial management enterprise architecture to “achieve the Secretary’s vision of relevant, reliable and timely financial information needed to support informed decision-making.” Consistent with our previous recommendations in this area, DOD has begun an extensive effort to document the department’s current as-is financial management architecture by inventorying systems now relied on to carry out financial management operations throughout the department. DOD has identified 674 top-level systems and at least 997 associated interfaces thus far and estimates that this inventory could include up to 1,000 systems when completed.

While DOD’s efforts at developing a financial management enterprise architecture are off to a good start, the challenges yet confronting the department in its efforts to fully develop, implement, and maintain a DOD-wide financial management enterprise architecture are unprecedented. Our May 2001 report details a series of recommended actions directed at ensuring that DOD employs recognized best practices for enterprise architecture management. This effort will be further complicated as the department strives to develop multiple enterprise architectures across its various business areas. For example, in June 2001, we recommended that DOD develop an enterprise architecture for its logistics operations. As I discussed previously, an integrated reform strategy is critical. In this context, it is essential that DOD closely coordinate and integrate the development and implementation of these, as well as other, architectures. By following this integrated approach and our previous recommendations, DOD will be in the best position to avoid the serious risk that, after spending billions of dollars on systems modernization, it will perpetuate the existing systems environment, with its duplication of systems, limited interoperability, and unnecessarily costly operations and maintenance.

Ensuring effective monitoring and oversight of progress will also be a key to bringing about effective implementation of the department’s financial management and related business process reform. We have previously testified that periodic reporting of status information to department top management, the Office of Management and Budget (OMB), the Congress, and the audit community is another key lesson learned from the department’s successful effort to address its Year 2000 challenge. Previous submissions of its Financial Management Improvement Plan have simply been compilations of data call information on the stovepiped approaches to financial management improvements received from the various DOD components. It is our understanding that DOD plans to change its approach and anchor its plans in an enterprise architecture.
If the department’s future plans are upgraded to provide a departmentwide strategic view of the financial management challenges facing the department, along with planned corrective actions, these plans can serve as an effective tool not only to help guide and direct the department’s financial management reform efforts but also to help maintain oversight of the department’s financial management operations. Going forward, this Subcommittee’s annual oversight hearings, as well as the active interest and involvement of other cognizant defense and oversight committees in the Congress, will continue to be key to effectively achieving and sustaining DOD’s financial management and related business process reform milestones and goals.

Given the size, complexity, and deeply ingrained nature of the financial management problems facing DOD, heroic end-of-the-year efforts relied on by some agencies to develop auditable financial statement balances are not feasible at DOD. Instead, a sustained focus on the underlying problems impeding the development of reliable financial data throughout the department will be necessary and is the best course of action. In this context, the Congress recently enacted the fiscal year 2002 National Defense Authorization Act, which contains provisions that will provide a framework for redirecting the department’s resources from the preparation and audit of financial statements, which are acknowledged by DOD leadership to be unauditable, to the improvement of DOD’s financial management systems and financial management policies, procedures, and internal controls. Under this new legislation, the department will also be required to report to the Congress on how resources have been redirected and the progress that has been achieved. This reporting will provide an important vehicle for the Congress to use in assessing whether DOD is using its available resources to best bring about the development of timely and reliable financial information for daily decision making and transform its financial management as envisioned by the Secretary of Defense.

In conclusion, we support Secretary Rumsfeld’s vision for transforming the department’s full range of business processes. Substantial personal involvement by the Secretary and other DOD top executives will be essential to change the DOD culture that has over time perpetuated the status quo and been resistant to a transformation of the magnitude envisioned by the Secretary. Comptroller Zakheim, as the Secretary’s leader for financial management modernization, will need to have the ability to make the tough choices on systems, processes, and personnel, and to control spending for new systems across the department, especially where new systems development is involved. Processes will have to be reengineered, and hierarchical, process-oriented, stovepiped, and internally focused approaches will have to be put aside. The past has taught us that well-intentioned initiatives will only succeed if the right incentives, transparency, and accountability mechanisms are in place. The events of September 11 and other funding and asset accountability issues associated with the war on terrorism may, at least in the short term, dilute the focused attention and sustained action necessary to fully realize the Secretary’s transformation goal, which is understandable given the circumstances.
At the same time, the demand for increased defense spending, when combined with the government’s long-range fiscal challenges, means that solutions to DOD’s business systems problems are even more important. As the Secretary has noted, billions of dollars of resources could be freed up for national defense priorities by eliminating waste and inefficiencies in DOD’s existing business processes. Only time will tell if the Secretary’s current transformation efforts will come to fruition. Others have attempted well-intentioned reform efforts in the past. Today, the momentum exists for reform. But the real question remains: will this momentum continue to exist tomorrow, next year, and throughout the years needed to make the necessary cultural, systems, human capital, and other key changes a reality? For our part, we will continue to work constructively with the department and the Congress in this important area.
Financial management problems at the Department of Defense (DOD) are complex, long-standing, and deeply rooted throughout its business operations. DOD's financial management deficiencies represent the single largest obstacle to achieving an unqualified opinion on the U.S. government's consolidated financial statements. So far, none of the military services or major DOD components have passed the test of an independent financial audit because of pervasive weaknesses in financial management systems, operations, and controls. These problems go back decades, and earlier attempts at reform have been unsuccessful. DOD continues to rely on a far-flung, complex network of finance, logistics, personnel, acquisition, and other management information systems for financial data to support day-to-day management and decision-making. This network has evolved into an overly complex and error-prone operation with (1) little standardization across DOD components; (2) multiple systems performing the same tasks; (3) the same data stored in multiple systems; (4) manual data entry into multiple systems; and (5) a large number of data translations and interfaces, which combine to exacerbate problems with data integrity. Many of the elements that are crucial to financial management reform and business process transformation--particularly those that rely on long-term systems improvements--will take years to fully implement.
One of FHA’s primary goals is to assist those households that are unable to meet the requirements of the private market for mortgages and mortgage insurance or that live in underserved areas where mortgages may be harder to obtain. In doing so, FHA applies more flexible underwriting standards than the private market generally allows. Borrowers seeking FHA-insured loans may make smaller down payments (as a percentage of the purchase price) than the private market requires and may also include in the amount they borrow most costs associated with closing the loan, rather than using cash for those expenses, as private lenders generally require. FHA is required by statute to set limits on the dollar amount of individual loans it will insure. These limits are based, in part, on local median home prices.

The Finance Board surveys major mortgage lenders each month, collecting information on the terms and conditions (including the sales prices of homes) of conventional single-family home loans closed during the last 5 business days of the month. The Finance Board may not require lenders to participate in its survey; those that do so participate voluntarily.

Fannie Mae and Freddie Mac, both government-sponsored enterprises, are part of the secondary mortgage market, through which many single-family home mortgages are ultimately sold. Federal law requires that Fannie Mae and Freddie Mac use information from the Finance Board’s survey on the year-to-year change in house prices to annually adjust the conforming loan limit (currently $240,000), which is a legislative restriction on the size of any individual loan that either may buy. FHA also uses information from the Finance Board to set limits on the dollar value of loans it will insure, which are based on the conforming loan limit and median home prices. That is, FHA sets an area’s loan limit at the greater of 48 percent of the conforming loan limit or 95 percent of the median home sales price for the area, but no greater than 87 percent of the conforming loan limit. Consequently, FHA loan limits vary depending on the location of the home and the median home sales price there but are no lower than $115,200 and no higher than $208,800—48 percent and 87 percent, respectively, of the conforming loan limit.

FHA is not required by statute to use a particular source of information on home prices to determine the median price of homes in an area and, consequently, the loan limit for the area. However, FHA has chosen to use the Finance Board survey for this purpose. FHA relies heavily on the survey to measure median home sales prices because it is the most comprehensive source of published house price data readily available to the agency.

The Office of Federal Housing Enterprise Oversight (OFHEO) also collects information on home sales. Specifically, both Fannie Mae and Freddie Mac provide data to OFHEO on all of the mortgages they purchase in order for OFHEO to construct a house price index. OFHEO uses the house price index to account for changes in the values of the homes securing the mortgages that the enterprises have purchased and their potential impact on credit risk. By definition, the index includes only conforming loans—those with values less than the conforming loan limit—because neither enterprise may purchase loans that exceed the conforming loan limit. In addition, the index excludes all government-insured loans. In 1997, Fannie Mae and Freddie Mac purchased 37 percent of all conventional loans originated that year for single-family homes.
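To make the loan-limit rule concrete, the following is a minimal sketch in Python of the formula as described above, using the $240,000 conforming limit cited earlier; the function name and the sample median price are illustrative, not FHA’s actual implementation.

```python
def fha_loan_limit(median_price, conforming_limit=240_000):
    """Sketch of the FHA loan-limit rule described above: the greater of
    48 percent of the conforming loan limit or 95 percent of the area's
    median home sales price, capped at 87 percent of the conforming limit."""
    floor = 0.48 * conforming_limit    # $115,200 at the $240,000 limit
    ceiling = 0.87 * conforming_limit  # $208,800 at the $240,000 limit
    return min(max(floor, 0.95 * median_price), ceiling)

# An area with a hypothetical $150,000 median price: 0.95 * 150,000 = 142,500,
# which falls between the floor and the ceiling.
print(fha_loan_limit(150_000))  # 142500.0
```

Under this rule, areas with median prices below roughly $121,300 receive the $115,200 floor, and areas with medians above roughly $219,800 receive the $208,800 ceiling.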
In about two-thirds of the 42 metropolitan areas we reviewed, no substantive difference existed in the 1997 median house prices calculated with Finance Board and OFHEO data. As figure 1 shows, in 27 of these areas, the difference between the higher and lower estimates of median prices according to the two sets of data was 5 percent or less. In an additional 10 areas, the two agencies’ estimates were within 10 percent of each other. For the 15 areas where the two estimates differed by more than 5 percent, the Finance Board estimated a higher median home sales price than OFHEO in 10 areas, while OFHEO estimated a higher median in 5 areas. Because one basis for measuring the median price—the Finance Board’s data—does not result in a substantively different price than another—OFHEO’s data—for about two-thirds of the areas we reviewed, FHA loan limits in those areas would be similar using either source of data. The median price determines whether FHA sets its loan limit at a percentage of the conforming limit or at 95 percent of the median home price.

The Finance Board’s data were a reasonable measure of an area’s median home sales price (for homes with conventional financing), according to officials at the Finance Board, Fannie Mae, and Freddie Mac. As support for this position, they cited the similarity in the median prices that the Finance Board and OFHEO calculated in about two-thirds of the areas we reviewed. Given the differences in the nature of the data the Finance Board collects through a survey of lenders versus OFHEO’s data, which represent all the conforming loans Fannie Mae and Freddie Mac purchased, these officials stated that they viewed each data set as supportive of the other. Regardless of the similarities and differences in the median home prices derived from these two sources, the different methods by which the Finance Board and OFHEO collect home price data could explain some variation between them. For example, while the Finance Board survey is intended to estimate house prices at the national level, estimates at the local level are likely to be less representative of all non-government-insured home sales for that area. These officials added that there would be even fewer differences between the two if we analyzed trends in median price data over a number of years.

The Finance Board’s survey included some higher-priced homes that would not be reflected in OFHEO’s data, sometimes resulting in a higher calculated median price than OFHEO’s data reflected. In fact, in 26 of the areas we reviewed, the Finance Board data showed higher median home sales prices than did the OFHEO data. According to officials from Fannie Mae and Freddie Mac, including these jumbo loans is the primary reason for that difference. Specifically, the Finance Board’s data included purchase prices up to $750,000 and loan amounts up to $500,000. Conversely, OFHEO’s data excluded all jumbo loans because they exceed the conforming loan limit ($214,600 in 1997), meaning neither Fannie Mae nor Freddie Mac could have purchased them.

Supplementing the Finance Board’s or OFHEO’s data with information on prices of homes financed with government-insured mortgages reduces the estimates of median prices across the board and within all of the metropolitan areas we reviewed.
Homes financed with government-insured mortgages typically cost less than homes financed with conventional mortgages, but the Finance Board and OFHEO (with few exceptions) collect data only on homes financed with conventional mortgages. Hence, including data on government-insured mortgages in the calculation for a given area results in a lower median price of homes. As table 1 shows, the effect of adding data on homes with government-insured mortgages to the Finance Board’s and OFHEO’s median price estimates is not uniform across all of the metropolitan areas; that is, it does not reduce the median in each area by the same amount. When we added data on homes with government-insured loans to the Finance Board’s data, median prices in individual metropolitan areas were 2 to 30 percent lower. When we added these data to OFHEO’s data, median prices were 6 to 31 percent lower.

The effect that including government-insured loans has on the estimated median price of homes in any given area depends on how much government-insured lending (relative to all other types of lending) was taking place in that area. Where government-insured lending was a relatively higher percentage of home loans, median prices decreased by a greater degree than the decrease for the 42 areas taken as a whole. Conversely, where there was relatively less government-insured lending in any given area, median prices also decreased—but to a lesser degree than in metropolitan areas with more government-insured lending.

FHA is exploring additional data sources to supplement the Finance Board’s data and to improve its own measurement of median house prices. In part, this is in recognition of the importance and value of timely and comprehensive data on house prices at the local level for its own purposes as well as larger, research-oriented uses. Also, recent legislative changes have made it more important for FHA to have accurate local-area measures of house prices on which to base loan limits. However, FHA has found no source that systematically collects house price data on an ongoing basis in all of the areas—metropolitan areas and counties—for which FHA must set loan limits. As a result, FHA has stepped up its efforts to determine the availability of, and any limitations associated with, additional data sources on home prices.

FHA’s most pressing reason for developing additional data sources is a provision in recent legislation mandating that the highest loan limit of any county within a metropolitan area must apply to loans insured in all the counties in that area. To implement this provision as part of a recent comprehensive update of all FHA loan limits, FHA supplemented its primary data source, the Finance Board survey, with data from the National Association of Realtors and a private marketing firm that collects and sells data from real estate transaction records. To a limited extent, FHA also had its field staff work with local interested parties, such as realtors’ associations, to gather sufficient recent data on which to base an estimate of an area’s median house price. Nonetheless, FHA officials told us that for over half of those areas whose loan limits were not automatically indexed to the conforming loan limit, the Finance Board’s survey was their primary source of median house price data. FHA’s goal is to comprehensively update all of its loan limits annually and, in doing so, to make use of additional data sources to broaden the extent to which its estimates of median house prices cover more of the housing market.
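To illustrate the arithmetic behind the median-price reductions described above, the sketch below pools a hypothetical list of conventional sale prices with typically lower-priced government-insured sales and recomputes the median; all figures are invented for illustration.

```python
import statistics

# Hypothetical sale prices for one area, in dollars.
conventional = [150_000, 165_000, 180_000, 195_000, 210_000]  # survey/OFHEO-style data
government_insured = [95_000, 110_000, 120_000]               # FHA/VA-style sales

print(statistics.median(conventional))                        # 180000
print(statistics.median(conventional + government_insured))   # 157500.0
# Pooling the lower-priced government-insured homes pulls the median down;
# the larger their share of an area's loans, the larger the drop.
```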
To do so, FHA recently initiated preliminary discussions with OFHEO about making use of its data (similar to the data OFHEO provided to us) in its next comprehensive update. In addition, FHA is considering obtaining data on jumbo loans as well as other loans that Fannie Mae and Freddie Mac have not purchased. FHA has no specific timetable for including such data, in large part because the sources of the data on some of these loans do not include information on house sales prices, which makes using the data much more methodologically complex and time-consuming than using a database such as OFHEO’s.

FHA has substantial discretion in choosing the source of median house price data it will use to set loan limits because, unlike the conforming loan limit, there is no statutory requirement for it to use a specific data source. Lacking a nationwide source of data that systematically collects comprehensive house price information in each and every area where FHA must set loan limits, the agency is left with the challenge of assembling the best data available to it. At present, its use of the Finance Board’s survey appears reasonable given that the only more comprehensive source of data that we found—OFHEO’s data—usually yielded a similar median price. Nonetheless, while both the Finance Board and OFHEO offer measures of median prices that capture one particular segment of the housing market—homes with conventional financing—neither covers all of the housing market. As a result, FHA’s efforts to broaden its coverage of the housing market will be guided by a need to identify what its data sources are not capturing and a need to consider the implications for its loan limits and potential FHA borrowers of using any new data sources.

We provided a draft of this report to the Department of Housing and Urban Development (HUD), the Federal Housing Finance Board (the Finance Board), the Office of Federal Housing Enterprise Oversight, the Federal National Mortgage Association, and the Federal Home Loan Mortgage Corporation for their review and comment. HUD agreed that the Finance Board’s survey is a reasonable source of home sales price data even though neither the survey nor the Office of Federal Housing Enterprise Oversight’s information on home sales covers the entire housing market. HUD commented that the report effectively describes the practices and resources it used to set FHA loan limits and identifies the data collection obstacles associated with this activity. HUD also provided technical corrections to the report, which we have incorporated. HUD’s comments are included as appendix II of this report.

The Finance Board agreed that our analysis indicates its survey is a reasonable measure of 1997 home sales prices in the areas we reviewed. However, the Finance Board also commented that because its data come from a voluntary sample of mortgage lenders, it cannot ensure that its sample size in individual metropolitan areas or counties is large enough to provide statistically reliable results. The Finance Board stated that if the Congress wants HUD to use the survey, it should provide the Finance Board with the authority to require lenders to participate in the survey. We have reported in the past that users of the Finance Board’s data suggested the sample size would need to be expanded to make the data more reliable for measuring local housing prices.
We have revised our description of the Finance Board’s survey to clarify that lenders participate in it voluntarily. The Finance Board’s comments are included as appendix III of this report. The Office of Federal Housing Enterprise Oversight provided technical corrections and clarifications to the report, which we have incorporated as appropriate. The Federal National Mortgage Association and the Federal Home Loan Mortgage Corporation also commented on the draft report. Both stated that they consider the data they provide the Office of Federal Housing Enterprise Oversight to be proprietary and confidential. We agreed to add this information to the report. The Federal Home Loan Mortgage Corporation also provided technical corrections to the report, which we have incorporated as appropriate.

Our review covered homes sold in selected metropolitan areas in calendar year 1997 (1) for which FHA insured or the Department of Veterans Affairs guaranteed the mortgages on the homes; (2) about which the lenders issuing the mortgages for the homes reported data on the loans in the Federal Housing Finance Board’s monthly interest rate survey; or (3) that had mortgages that Fannie Mae or Freddie Mac subsequently purchased, reporting data on those loans to OFHEO. For this analysis, we focused on the 42 metropolitan statistical areas (MSA) for which the Finance Board publicly reports data annually. By definition, MSAs have at least one city with 50,000 inhabitants or are urbanized areas with a total metropolitan population of at least 100,000. Most MSAs consist of more than one county.

For the 42 areas, we obtained (1) from FHA and the Department of Veterans Affairs, data on the median price of all of the homes sold for which the federal government insured or guaranteed the mortgages and (2) from the Finance Board, the median purchase price of all the homes sold whose mortgages were reflected in the Board’s monthly interest rate survey. Using data on loan amounts and loan-to-value ratios, OFHEO calculated and provided to us an estimate of the median price of all homes sold in these areas that had mortgages that were subsequently purchased by Fannie Mae or Freddie Mac. For this calculation, OFHEO used the data Fannie Mae and Freddie Mac provide it for the calculation of its house price index (unlike OFHEO’s house price index, the data it provided us for this review are not publicly released). We then compared the median purchase prices according to these sources of data. Because homes financed with government-insured loans are typically lower priced and neither the Finance Board nor OFHEO includes data on government-insured mortgages, we also calculated median purchase prices that included data from FHA and the Department of Veterans Affairs with the data from OFHEO and the Finance Board.

Throughout our review, we discussed issues related to data sources for measuring house price changes with officials from FHA, OFHEO, HUD’s Office of Policy Development and Research, the Finance Board, Fannie Mae, and Freddie Mac. We also supplemented this information by discussing these issues with officials of private organizations having an interest or expertise in this area, including the National Association of Homebuilders and the Mortgage Insurance Companies of America. We also discussed the results of our analysis comparing median prices from the various sources with officials from the agencies that provided these data.
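The OFHEO price estimate described above can be illustrated with a short sketch: if a loan covered a known fraction (the loan-to-value ratio) of the purchase price, the price is the loan amount divided by that ratio. The pairs below are hypothetical, and the division-by-LTV derivation is our reading of the methodology as described.

```python
import statistics

# Hypothetical (loan_amount, loan_to_value) pairs of the kind described above.
loans = [(152_000, 0.80), (133_000, 0.95), (96_000, 0.60)]

# Estimated purchase price = loan amount / loan-to-value ratio.
prices = [amount / ltv for amount, ltv in loans]
print(prices)                     # [190000.0, 140000.0, 160000.0]
print(statistics.median(prices))  # 160000.0
```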
We did not directly assess the reliability of the data we obtained from FHA, the Department of Veterans Affairs, OFHEO, or the Finance Board. To assure ourselves that each data set was sufficiently reliable for our purposes, we reviewed the procedures each agency had in place to ensure its data are reliable and accurate. We conducted our review from July 1998 through March 1999 in accordance with generally accepted government auditing standards.

We are sending copies of this report to the appropriate congressional committees; the Honorable Andrew Cuomo, Secretary of Housing and Urban Development; the Honorable Bruce A. Morrison, Chairman of the Federal Housing Finance Board; the Honorable Mark Kinsey, Acting Director of the Office of Federal Housing Enterprise Oversight; the Honorable Franklin D. Raines, Chairman and Chief Executive Officer of Fannie Mae; the Honorable Leland C. Brendsel, Chairman and Chief Executive Officer of Freddie Mac; and the Honorable Jacob J. Lew, Director of the Office of Management and Budget. We will make copies available to others upon request. Please call me at (202) 512-7631 if you or your staff have any questions about the material in this report. Major contributors to this report, listed in appendix IV, are Judy A. England-Joseph, DuEwa A. Kamara, Bill MacBlane, and Mathew Scire.
Pursuant to a congressional request, GAO provided information on housing prices from sources other than the Federal Housing Finance Board, focusing on: (1) comparing data on house prices from the Finance Board with data the Department of Housing and Urban Development's Office of Federal Housing Enterprise Oversight (OFHEO) collects to measure house price changes; (2) views of officials of the agencies involved on the results of this analysis; (3) the effect on median prices of supplementing the Finance Board's and OFHEO's data with information each does not already include on lower-priced homes with government-insured mortgages; and (4) the Federal Housing Administration's (FHA) recent efforts to explore alternative sources of data for measuring median home prices. GAO noted that: (1) the Finance Board and OFHEO's estimates of 1997 median home sales prices were similar in about two-thirds of the metropolitan areas GAO reviewed; (2) in 27 of the 42 areas, the two agencies' estimates were within 5 percent of each other; (3) loan limits based on either set of data would be similar in these areas; (4) for the remaining 15 areas GAO reviewed, the Finance Board estimated a higher median home sales price in 10 of the areas, while OFHEO's estimate was higher in 5 areas; (5) officials familiar with these data cited the low number of substantive differences in GAO's analysis as an indicator of the validity of the Finance Board's data; (6) because no substantive difference existed between the two sets of data in about two-thirds of the areas GAO reviewed, the officials indicated the Finance Board's data are a reasonable measure of an area's median sales price for homes without government-insured financing; (7) in those areas for which substantive differences did exist, officials from the different agencies involved agreed the reason was that the Finance Board includes larger loans, and thus higher home purchase prices, in its survey than does OFHEO; (8) the Finance Board's 1997 data include loan amounts up to $500,000 and house purchase prices up to $750,000; (9) OFHEO's data for 1997 included no loans greater than $214,600; (10) these officials also cited normal variations associated with surveys and statistical sampling as a reason for some differences between the two sets of data; (11) supplementing the data from either the Finance Board or OFHEO with data on homes financed with government-insured loans would lower the estimated median home sales price in any given area by 2 to 31 percent; (12) the purchase prices of homes financed with mortgages insured by FHA and the Department of Veterans Affairs are, on average, lower than those of homes bought with privately insured financing; (13) these lower prices result from the limits on the size of individual loans FHA may insure and from the tendency of government-insured financing to be focused on first-time homebuyers; (14) FHA is engaged in an effort to use additional sources of data; (15) FHA relies heavily on the Finance Board's survey for the data it needs to set its loan limits, but to a limited extent it has also supplemented that survey with data its field offices gather on local home sales prices; and (16) FHA is considering using data on loans that neither the Federal National Mortgage Association nor the Federal Home Loan Mortgage Corporation has purchased.
Following Iraq’s invasion of Kuwait in August 1990, the United States and other allied nations sent troops to the Persian Gulf region in Operation Desert Shield. In the winter of 1991, the allied forces attacked Iraq in an air campaign and subsequent invasion by ground forces (Operation Desert Storm). Despite the harsh environment, illness, injury, and death rates among approximately 700,000 U.S. military personnel were significantly lower than in previous major conflicts. Yet, shortly after the war, some veterans began reporting health problems that they believed might be due to their participation in the war. VA, DOD, HHS, and other federal agencies initiated research and investigations into these health concerns and the consequences of possible hazardous exposures.

In 1993, the President designated VA as the lead coordinator of research activities on the health consequences of service in the Gulf War. Subsequently, in 1998, the Congress expanded VA’s coordination to include all Gulf War health-related activities. These activities include ensuring that the findings of all federal Gulf War illnesses research are made available to the public and that federal agencies coordinate outreach to Gulf War veterans in order to provide information on potential health risks from service in the Gulf War and corresponding services or benefits. The Secretary of VA is required to submit an annual report on the results, status, and priorities of federal research activities related to the health consequences of military service in the Gulf War to the Senate and House Veterans’ Affairs Committees. VA has provided these reports to congressional committees since 1995. In May 2004, VA issued its annual report for 2002. VA has carried out its coordinating role through the auspices of interagency committees, which have changed over time in concert with federal research priorities and needs. The mission of these interagency committees has evolved to include coordination for research on all hazardous deployments, including but not limited to the Gulf War. (See fig. 1.)

Federal research efforts for Gulf War illnesses have been guided by questions established by the interagency Research Working Group (RWG), which was initially established under the Persian Gulf Veterans Coordinating Board (PGVCB) to coordinate federal research efforts. From 1995 through 1996, RWG identified 19 major research questions related to illnesses in Gulf War veterans. In 1996, the group added 2 more questions regarding cancer risk and mortality rates to create a set of 21 key research questions that serves as an overarching strategy in guiding federal research for Gulf War illnesses. (See app. II for the list of key questions.) The 21 research questions cover the extent of various health problems, exposures among the veteran population, and the difference in health problems between Gulf War veterans and control populations. In 1998, RWG expanded federal Gulf War illnesses research priorities to include treatment, longitudinal follow-up of illnesses, disease prevention, and improved hazard assessment; however, RWG did not add any new research questions.

With regard to veterans’ health status, the research questions cover the prevalence among veterans and control populations of symptoms, symptom complexes, illnesses, altered immune function or host defense, birth defects, reproductive problems, sexual dysfunction, cancer, pulmonary symptoms, neuropsychological or neurological deficits, psychological symptoms or diagnoses, and mortality.
With regard to exposure, the research questions cover Leishmania tropica (a type of parasite), petroleum, petroleum combustion products, specific occupational/environmental hazards (such as vaccines and chemical agents), pyridostigmine bromide (given to troops as a defense against nerve agents), and psychophysiological stressors (such as exposure to extremes of human suffering).

Separately from these research efforts, DOD is responsible for investigating and reporting incidents of possible chemical and biological agent exposures and other potential occupational and environmental hazards. Within DOD, the entities responsible for overseeing Gulf War exposure investigations have also evolved over time. (See fig. 2.)

In 2002, VA established RAC to provide advice to the Secretary of VA on proposed research relating to the health consequences of military service in the Gulf War. RAC, which is composed of members of the general public, including non-VA researchers and veterans’ advocates, was tasked with assisting VA in its research planning by exploring the entire body of Gulf War illnesses research, identifying gaps in the research, and proposing potential areas of future research. VA provides an annual budget of about $400,000 for RAC, which provides salaries for two full-time employees and one part-time employee and supports committee operating costs. RAC’s employees include a scientific director and support staff who review published scientific literature and federal research updates and collect information from scientists conducting relevant research. RAC’s staff provide research summaries for discussion and analysis to the advisory committee through monthly written reports and at regularly scheduled meetings. RAC holds public meetings several times a year at which scientists present published and unpublished findings from Gulf War illnesses research. In 2002, RAC published a report with recommendations to the Secretary of VA. It expects to publish another report soon.

More than 80 percent of the 240 federally funded Gulf War illnesses research projects have been completed. In recent years, funding for this research has decreased, federal research priorities have expanded to incorporate the long-term health effects of all hazardous deployments, and interagency coordination of Gulf War illnesses research has waned. In addition, with respect to the federal research strategy, VA has not reassessed the research findings to determine whether the 21 key research questions have been answered or to identify the future direction of federal research in this area. In a separate but related effort, as of April 2003, all of DOD’s Gulf War investigations were complete.

Since 1991, 240 federally funded research projects have been initiated by VA, DOD, and HHS to address the health concerns of individuals who served in the Gulf War. As of September 2003, 194 of the 240 federal Gulf War illnesses research projects (81 percent) had been completed; another 46 projects (19 percent) were ongoing. (See fig. 3.) From 1994 through 2003, VA, DOD, and HHS collectively spent a total of $247 million on Gulf War illnesses research. DOD has provided the most funding for Gulf War illnesses research, funding about 74 percent of all federal Gulf War illnesses research within this time frame. Figure 4 shows the comparative percentage of funding by these agencies for each fiscal year since 1994. After fiscal year 2000, overall funding for Gulf War illnesses research decreased. (See fig. 5.)
Fiscal year 2003 research funding was about $20 million less than funding provided in fiscal year 2000. This overall decrease in federal funding was paralleled by a shift in federal research priorities, which expanded to include all hazardous deployments and shifted away from a specific focus on Gulf War illnesses. VA officials said that although Gulf War illnesses research continues, the agency is expanding the scope of its research to include the potential long-term health effects in troops who served in hazardous deployments other than the Gulf War. In October 2002, VA announced plans to commit up to $20 million for research into Gulf War illnesses and the health effects of other military deployments. Also in October 2002, VA issued a program announcement for research on the long-term health effects in veterans who served in the Gulf War or in other hazardous deployments, such as Afghanistan and Bosnia/Kosovo. As of April 2004, one new Gulf War illnesses research project was funded for $450,000 under this program announcement.

Although DOD has historically provided the majority of funding for Gulf War illnesses research, DOD officials stated that their agency currently has no plans to fund new Gulf War illnesses research projects. Correspondingly, DOD has not funded any new Gulf War illnesses research in fiscal year 2004, except for modest supplements to complete existing projects and one pending award for research funded from a specific appropriation. DOD also did not include Gulf War illnesses research funding in its budget proposals for fiscal years 2005 and 2006. DOD officials stated that because the agency is primarily focused on the needs of the active duty soldier, its interest in funding Gulf War illnesses research was highest when a large number of Gulf War veterans remained on active duty after the war—some of whom might develop unexplained symptoms and syndromes that could affect their active duty status. Since 2000, DOD’s focus has shifted from research solely on Gulf War illnesses to research on medical issues of active duty troops in current or future military deployments. For example, in 2000 VA and DOD collaborated to develop the Millennium Cohort study, which is a prospective study evaluating the health of both deployed and nondeployed military personnel throughout their military careers and after leaving military service. The study began in October 2000 and was awarded $5.25 million through fiscal year 2002, with another $3 million in funding estimated for fiscal year 2003.

VA’s coordination of federal Gulf War illnesses research has gradually lapsed. Starting in 1993, VA carried out its responsibility for coordinating all Gulf War health-related activities, including research, through interagency committees, which evolved over time to reflect changing needs and priorities. (See fig. 1.) In 2000, interagency coordination of Gulf War illnesses research was subsumed under the broader effort of coordination for research on all hazardous deployments. Consequently, Gulf War illnesses research was no longer a primary focus. The most recent interagency research subcommittee, which is under the Deployment Health Working Group (DHWG), has not met since August 2003, and as of April 2004, no additional meetings had been planned. Additionally, VA has not reassessed the extent to which the collective findings of completed Gulf War illnesses research projects have addressed the 21 key research questions developed by the RWG. (See app. II.)
The only assessment of progress in answering these research questions was published in 2001, when findings from only about half of all funded Gulf War illnesses research were available. Moreover, the summary did not identify whether there were gaps in existing Gulf War illnesses research or promising areas for future research. No reassessment of these research questions has been undertaken to determine whether they remain valid, even though about 80 percent of federally funded Gulf War illnesses research projects now have been completed. In 2000, we reported that without such an assessment, many underlying questions about causes, course of development, and treatments for Gulf War illnesses may remain unanswered.

As of April 2003, DOD had completed all of its Gulf War health-related investigations, which are separate from Gulf War illnesses research. DOD began conducting investigations on Gulf War operations and their implications for service members’ and veterans’ health in 1996. Generally, DOD instituted an investigation after it received a report of a possible exposure to a chemical or biological agent or some other environmental, chemical, or biological hazard. From 1996 to 2003, DOD conducted 50 investigations at a cost of about $68 million. DOD published the 50 investigations in the form of 20 case narratives, 10 information papers, 5 closeout reports, and 5 environmental exposure reports. Additionally, the Office of the Special Assistant for Gulf War Illnesses (OSAGWI) contracted with the RAND Corporation to publish 10 reports reviewing the medical and scientific literature on the known health effects of substances to which Gulf War veterans may have been exposed. Some investigations focused on examining possible exposures to chemical warfare agents or the presence of chemical weapons at specific sites. Other investigations studied the possible linkage between environmental hazards (such as contaminated water, equipment used during the Gulf War, oil well fires, and particulate matter) and illnesses or health effects. OSAGWI published four annual reports summarizing the results of investigations. Generally, these reports concluded that there were limited exposures by troops to some hazards and limited or no short- or long-term adverse effects expected from these exposures. The last annual report was published in December 2000.

As of April 2004, federal agencies had funded seven research projects related to cancer incidence among Gulf War veterans, four of which have been completed. Published results from the completed and ongoing studies generally show that rates of cancer among Gulf War veterans were similar to or lower than the rates among nondeployed veterans or the general population. However, results of these studies may not be reliable due to limitations in research related to cancer incidence in Gulf War veterans. Future research efforts may also be hindered by inadequate federal data on the health characteristics of Gulf War veterans.

Of the 240 federally funded research projects on Gulf War illnesses, VA officials stated that only 7 were related to cancer incidence in Gulf War veterans—accounting for about 3 percent of the entire research portfolio. Four of the seven research projects have been completed; the other three are ongoing. Only two of the seven research projects specifically studied cancer incidence.
The remaining five research projects did not focus on cancer incidence, but instead included cancer as a component of a broader analysis of mortality, hospitalization, or general health status of Gulf War veterans. (See table 1 for more details on these studies.) Overall, the four published research projects found that deployed Gulf War veterans did not have demonstrable differences in cancer-related ailments compared with nondeployed veterans or the general population. In addition, one of the published studies found that rates of hospitalization among Gulf War veterans were similar to or lower than rates among nondeployed veterans, and another found that cancer-related mortality rates among Gulf War veterans were similar to or lower than those in the general population.

Research efforts are continuing for one of the two funded research projects specifically related to cancer incidence in Gulf War veterans. Researchers conducted a pilot project, scheduled to end in September 2004, which matched the cancer registries of six states and the District of Columbia with a database of all Gulf War veterans. In order to build on these efforts, the researchers plan to expand the pilot study to include additional states with cancer registries to obtain a more refined estimate of cancer incidence in Gulf War veterans. While this approach appears promising, the study’s principal investigator said further efforts beyond September 2004 would be limited to working with state cancer registries that do not charge a fee or do not require on-site use of a registry.

A number of inherent limitations in research related to cancer incidence in Gulf War veterans could adversely affect the reliability of the findings from such research. (See table 1.) For example, since some cancers can take 15 years or more to develop and subsequently be detected, it may be too early to determine cancer incidence in Gulf War veterans, as studies 4 and 5 in table 1 were designed to do. Hospitalization studies of Gulf War veterans are applicable only to those veterans who seek care in specific hospitals included in the studies; veterans who use other health care systems are not included. Mortality studies of Gulf War veterans are limited because only veterans who have died of cancer are captured; other veterans who have not died, but have been diagnosed with cancer, are not included. Additionally, some general health studies of Gulf War veterans may use self-reported data only, which may not be accurate unless validated by objective physical or laboratory findings. Other research projects, which have samples that are not representative of all Gulf War veterans, such as studies 1 and 5 in table 1, may not reliably assess the possibility of elevated levels of cancer incidence or related ailments in Gulf War veterans when compared to the general population or nondeployed veterans.

Research related to cancer incidence in Gulf War veterans may also be hampered by incomplete federal data on the health characteristics of Gulf War veterans. In 1998, we reported that VA and DOD did not have data systems providing complete information on the health characteristics of Gulf War veterans that could be used to accurately estimate cancer incidence. For example, data from medical records and files on disability claims, treatment, and pensions do not include all Gulf War veterans. These data do not account for veterans who are separated from the services and receive non-VA health care or disability benefits.
Furthermore, linking VA and DOD data systems still would not overcome these shortcomings. VA officials have also stated that existing data systems, such as medical record or pension systems, are not adequate for determining cancer incidence and that epidemiological research projects are needed. RAC’s efforts to provide advice and make recommendations on Gulf War illnesses research may have been impeded by VA’s limited sharing of information on research initiatives and program planning as well as VA’s limited collaboration with the committee. However, VA and RAC are exploring ways to improve information sharing, including VA’s hiring of a senior scientist who would both guide the agency’s Gulf War illnesses research and serve as the agency’s liaison to provide routine updates to RAC. VA and RAC are also proposing changes to improve collaboration, including possible commitments from VA to seek input from RAC when developing research program announcements. At the time of our review, most of these proposed changes were in the planning stages. According to RAC officials, VA senior administrators’ poor information sharing and limited collaboration with the committee about Gulf War illnesses research initiatives and program planning may have hindered RAC’s ability to achieve its mission of providing research advice to the Secretary of VA. RAC is required by its charter to provide advice and make recommendations to the Secretary of VA on proposed research studies, research plans, and research strategies relating to the health consequences of service during the Gulf War. (See app. III for RAC’s charter.) RAC’s chairman and scientific director said that the recommendations and reports that the advisory committee provides to the Secretary of VA are based on its review of research projects and published and unpublished research findings related to Gulf War illnesses. Although RAC and VA established official channels of communication, VA did not always provide RAC with important information related to Gulf War illnesses research initiatives and program planning. In 2002, VA designated a liaison to work with RAC’s liaison in order to facilitate the transfer of information to the advisory committee about the agency’s Gulf War illnesses research strategies and studies. However, RAC officials stated that most communication occurred at their request; that is, the VA liaison and other VA staff were generally responsive to requests, but did not establish mechanisms to ensure that essential information about research program announcements or initiatives was automatically provided to the advisory committee. RAC officials cited the following instances in which VA did not fully collaborate with the advisory committee or provide information that RAC considered important: According to RAC’s scientific director, bimonthly teleconferences between the advisory committee’s and VA’s liaisons did not result in full disclosure of relevant ongoing research activities. For example, despite several months of discussions in which RAC’s liaison requested information about proposed research program announcements for Gulf War illnesses research, VA’s liaison did not inform RAC that VA’s Office of Research and Development was preparing a research program announcement until it was published in October 2002. Consequently, RAC officials said that they did not have an opportunity to carry out the committee’s responsibility of providing advice and making recommendations on research strategies and plans. 
RAC officials stated that VA did not notify advisory committee members that the Longitudinal Health Study of Gulf War Era Veterans—a study designed to address possible long-term health consequences of service in the Gulf War—had been developed and that the study’s survey was about to be sent to study participants. RAC officials expressed concern that VA did not inform the advisory committee about the survey even after the plans for it were made available for public comment. Although the survey had been finalized, the study’s principal investigator provided additional time to allow RAC to recommend additional survey question topics and incorporated RAC’s suggested changes into the survey. In May 2004, VA published its annual report that described the results, status, and priorities of federally funded Gulf War illnesses research as of 2002. However, RAC officials said they had not seen a draft of this report and had not been asked to review or comment on the document before it was published, even though the advisory committee has a responsibility to advise the Secretary of VA on the state and direction of Gulf War illnesses research. According to RAC officials, there were also instances in which information relevant to Gulf War illnesses research provided by VA’s liaison or other VA officials was unclear or incomplete. Miscommunication about the purpose of the October 2002 research program announcement and the details of a corresponding VA plan to increase funding up to $20 million for research related to hazardous military deployments, which would include the Gulf War, led RAC members to believe that VA had committed a large portion of this $20 million to Gulf War illnesses research for fiscal year 2004. Moreover, RAC officials did not receive routine reports on Gulf War illnesses research proposals that had been either received or funded by VA under the October 2002 research program announcement. RAC officials said that until VA administrators were asked to brief the advisory committee in February 2004, advisory committee members were unaware that only one new Gulf War illnesses research project had received funding for fiscal year 2004 under this program announcement and that no other proposals were under review. Information sharing about these types of issues is common practice among advisory committees of the National Institutes of Health (NIH), which has more federal advisory committees than any other executive branch agency. A senior official within NIH’s Office of Federal Advisory Committee Policy said that it is standard practice for NIH advisory committees to participate closely in the development of research program announcements. For example, some advisory committees’ members review preliminary drafts of announcements, and some discuss program announcements during regular committee meetings. Furthermore, this official stated that many NIH institutes require advisory committee approval before issuing research program announcements. In addition, NIH’s advisory committee members are routinely asked to make recommendations on both research concepts and priorities for research projects, and are kept up-to-date about the course of ongoing research projects. This official also stated that NIH advisory committee members often review draft reports summarizing research findings or research progress prior to their publication. 
Additionally, RAC officials stated that VA’s staffing choices for the liaison position and more recent VA staff turnover have hindered the development of working relationships and information flow. RAC officials stated that the initial VA liaison—a senior official in one of VA’s four research services—was not very knowledgeable about current Gulf War illnesses research developments. In early 2003, VA’s Chief Research and Development Officer (CRADO), whom RAC officials said was knowledgeable about Gulf War illnesses issues, began to serve as the VA liaison to RAC. (See fig. 6 for organizational chart.) However, this individual left VA in December 2003, and according to RAC officials, further communication with the advisory committee was delegated to lower-level VA staff. After the advisory committee’s February 2004 meeting, the acting CRADO (appointed in December 2003) and the deputy CRADO began to communicate regularly with the advisory committee. However, the acting CRADO has additional management responsibilities that can limit the amount of time available to coordinate with RAC. Specifically, in early April 2004, this official was named to temporarily head VA’s health care system—the Veterans Health Administration. For this reason, the deputy CRADO more often has acted as a point of contact for the committee. In recognition of RAC’s concerns, VA is proposing several actions to improve information sharing, including VA’s hiring of a senior scientist to guide its Gulf War illnesses research and improving formal channels of communication. In addition, VA and RAC are exploring methods to improve collaboration. These would include possible commitments from VA to seek input from RAC when developing research program announcements and to include RAC members in a portion of the selection process for funding Gulf War illnesses research projects. As of April 2004, most of the proposed changes were in the planning stages. Since the February 2004 RAC meeting, VA and RAC officials said they have had multiple meetings and phone conversations and have corresponded via e-mail in an attempt to improve communication and collaboration. VA officials said they have already instituted efforts to hire a senior scientist to guide the agency’s Gulf War illnesses research efforts. The official assigned to this position will be the RAC liaison and coordinator of VA’s research on Gulf War illnesses and health issues related to other hazardous deployments. According to VA officials, this official will be required to formally contact RAC officials weekly, with informal communications on an as needed basis. In addition, this official will be responsible for providing periodic information on the latest publications or projects related to Gulf War illnesses research. To facilitate collaboration with RAC, VA has proposed involving RAC members in developing VA program announcements designed to solicit research proposals, both specifically for Gulf War illnesses and related areas of interest, such as general research into unexplained illnesses. RAC officials stated that throughout March and April 2004, VA and RAC officials had been jointly developing a new research program announcement for Gulf War illnesses. In addition, VA has proposed that RAC will be able to recommend scientists for inclusion in the scientific merit review panels. VA also plans to involve RAC in reviews of project relevancy to Gulf War illnesses research goals and priorities after the research projects undergo scientific merit review. 
This could facilitate RAC’s ability to provide recommendations to the CRADO on the projects that it has judged to be relevant to the Gulf War illnesses research plan.

While more than 80 percent of federally funded Gulf War illnesses research projects have been completed, little effort has been made to assess progress in answering the 21 key research questions or to identify the direction of future research in this area. Additionally, in light of decreasing federal funds and expanding federal research priorities, research specific to Gulf War illnesses is waning. Without a comprehensive reassessment of Gulf War illnesses research, underlying questions about the unexplained illnesses suffered by Gulf War veterans may remain unanswered. Since RAC’s establishment in January 2002, its efforts to provide the Secretary of VA with advice and recommendations may have been hampered by incomplete disclosure of VA’s Gulf War illnesses research activities. By limiting information sharing with RAC, VA has not fully realized the assistance that the scientists and veterans’ advocates who serve on RAC could provide in developing effective policies and guidance for Gulf War illnesses research. VA and RAC are exploring new approaches to improve information sharing and collaboration. If these approaches are implemented, RAC’s ability to play a pivotal role in helping VA reassess the direction of Gulf War illnesses research may be enhanced. However, most of these changes had not been formalized at the time of our review.

With respect to the federal Gulf War illnesses research efforts, we recommend that the Secretary of Veterans Affairs take the following action: conduct a reassessment of the Gulf War illnesses research strategy to determine whether the 21 key research questions have been answered, whether they remain relevant, and whether there are promising areas for future research. To facilitate RAC’s ability to provide advice on Gulf War illnesses research, we recommend that the Secretary of Veterans Affairs take the following two additional actions: (1) ensure that a liaison who is knowledgeable about Gulf War illnesses research is appointed to routinely share information with RAC, and (2) ensure that VA’s research offices collaborate with RAC on Gulf War illnesses research program development activities.

We provided a draft of this report for comment to VA and DOD. In commenting on this draft, VA agreed with the report’s conclusions and concurred with the report’s recommendations. VA said that it has begun a preliminary assessment of the federal Gulf War illnesses research strategy, including an evaluation of the 21 key research questions, to ensure the research strategy’s continued validity and to identify promising areas for future research. The agency also noted that it has undertaken various steps, such as coordinating its most recent request for Gulf War research applications with RAC, in order to better collaborate with the advisory committee. VA’s written comments are in appendix IV. DOD informed us that it had no substantive comments on the draft report. Both VA and DOD provided technical comments, which we incorporated where appropriate. We are sending copies of this report to the Secretary of VA, the Secretary of Defense, and the Secretary of HHS. We will also provide copies to others upon request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov.
If you or your staff have any questions about this report, please call me at (202) 512-7119 or Bonnie Anderson at (404) 679-1900. Karen Doran, John Oh, Danielle Organek, and Roseanne Price also made key contributions to this report.

To describe the status of research and investigations on Gulf War illnesses, we reviewed reports to congressional committees outlining annually awarded and completed research projects and research funding. We summarized data from the Department of Veterans Affairs’ (VA) annual reports to congressional committees, including numbers of funded research projects and total funding by federal agency, in order to determine the status of completed research. We interviewed VA’s then-Assistant Chief Research and Development Officer (CRADO) and the Department of Defense’s (DOD) Deputy Director of the Deployment Health Support Directorate (DHSD) regarding the agencies’ current and future Gulf War illnesses research and investigation plans. We also interviewed CRADO and DHSD staff and senior managers with DOD’s medical research offices, including Defense Research and Engineering and the Army’s Medical Research and Materiel Command. We reviewed other relevant documents, including interagency coordinating council meeting minutes, federal laws, presidential directives, and VA- and DOD-published documents on Gulf War illnesses research and DOD investigations.

To describe efforts made by VA and DOD to monitor cancer incidence among Gulf War veterans, we interviewed VA’s then-Assistant CRADO, a VA senior research manager, and VA researchers, along with DOD’s Deputy Director of DHSD. We reviewed annual reports to congressional committees describing federally funded Gulf War illnesses research, published articles from peer-reviewed journals reporting research findings, and other agency documents describing research projects.

To evaluate the efforts of VA’s Research Advisory Committee on Gulf War Veterans’ Illnesses (RAC) to provide advice on Gulf War illnesses research, we interviewed RAC’s Chairman and Scientific Director, attended the most recent RAC meeting in February 2004, and reviewed RAC reports and recommendations to the Secretary of VA. We also interviewed officials with the National Institutes of Health’s (NIH) Office of Federal Advisory Committee Policy and officials within an NIH advisory committee to identify common practices related to information sharing and collaboration between NIH and its advisory committees. To determine VA’s efforts to improve information sharing and collaboration with RAC, we interviewed VA’s deputy CRADO and CRADO staff.

From 1995 through 1996, the Research Working Group (RWG) of the interagency Persian Gulf Veterans’ Coordinating Board identified 19 major research questions related to illnesses in Gulf War veterans. RWG later added 2 more questions to create a set of 21 key research questions to serve as a guide for federal research on Gulf War illnesses. (See table 3.)
More than a decade after the 1991 Persian Gulf War, there is continued interest in the federal response to the health concerns of Gulf War veterans. Gulf War veterans' reports of unexplained illnesses and possible exposures to various health hazards have prompted numerous federal research projects on Gulf War illnesses. This research has been funded primarily by the Department of Veterans Affairs (VA), the Department of Defense (DOD), and the Department of Health and Human Services. GAO is reporting on (1) the status of research and investigations on Gulf War illnesses, (2) the efforts that have been made by VA and DOD to monitor cancer incidence among Gulf War veterans, and (3) VA's communication and collaboration with the Research Advisory Committee on Gulf War Veterans' Illnesses (RAC). Most federally funded Gulf War illnesses research projects and investigations are complete, but VA--the agency with lead responsibility for coordination of Gulf War illnesses issues--has not yet analyzed the latest research findings to identify whether there are gaps in current research or to identify promising areas for future research. As of September 2003, about 80 percent of the 240 federally funded medical research projects for Gulf War illnesses had been completed. In recent years, VA and DOD funding for this research has decreased, federal research priorities have changed, and interagency coordination of Gulf War illnesses research has waned. In addition, VA has not reassessed the extent to which the collective findings of completed Gulf War illnesses research projects have addressed key research questions. The only assessment of progress in answering these research questions was published in 2001, when findings from only about half of all federally funded Gulf War illnesses research were available. Moreover, it did not identify whether there were gaps in existing Gulf War illnesses research or promising areas for future research. This lack of comprehensive analysis leaves VA at greater risk of failing to answer unresolved questions about causes, course of development, and treatments for Gulf War illnesses. In a separate effort, DOD has conducted 50 investigations since 1996 on potential hazardous exposures during the Gulf War. Generally, these investigations concluded that there were limited exposures by troops to some hazards and, at most, limited short- or long-term adverse effects expected from these exposures. As of April 2003, all investigations were complete. Federal agencies have funded seven research projects related to cancer incidence among Gulf War veterans. However, several limitations exist that affect research related to cancer incidence. For example, some cancers may take many years to develop and be detected. In addition, some research projects studying cancer incidence have not studied enough Gulf War veterans to reliably assess cancer incidence. Research may also be impeded by incomplete federal data on the health characteristics of Gulf War veterans. RAC's efforts to provide advice and make recommendations on Gulf War illnesses research to the Secretary of VA may have been hampered by VA senior administrators' incomplete or unclear information sharing and limited collaboration on research initiatives and program planning. VA and RAC are exploring ways to improve collaboration, including VA's hiring of a senior scientist who would both guide VA's Gulf War illnesses research and serve as the agency's liaison for routine updates to the advisory committee. 
However, most of these changes had not been finalized at the time of our review.
FFS Medicare generally pays providers directly for the services they perform—such as paying physicians for office visits—based on predetermined payment formulas. FFS payments are based on claims data received directly from providers. CMS relies primarily on prepayment automated checks and postpayment medical reviews to identify and recover FFS improper payments. Under the Improper Payments Information Act of 2002 (IPIA), as amended, CMS reported that the FFS improper payment rate was 11 percent for fiscal year 2016. According to CMS, insufficient documentation accounted for two-thirds of the FFS improper payment rate. CMS and its contractors engage in a number of activities to prevent, identify, and recover improper payments in FFS. The Patient Protection and Affordable Care Act of 2010 included provisions designed to strengthen Medicare’s provider enrollment and screening requirements. Subsequently, CMS implemented a revised screening process for new and existing providers and suppliers based on the potential risk of fraud, waste, and abuse. In November 2016, we evaluated this revised screening process and found that CMS used the new process to screen and revalidate over 2.4 million unique applications and existing enrollment records. As a result of this process, over 23,000 new applications were denied or rejected, and over 703,000 existing enrollment records were deactivated or revoked. CMS estimates that this process saved $2.4 billion in Medicare payments to ineligible providers and suppliers from March 2011 to May 2015.

Also in FFS, CMS uses different types of contractors to conduct prepayment and postpayment reviews of Medicare claims at high risk for improper payments. We examined the review activities of these contractors and in April 2016 reported that using prepayment reviews to deny improper claims and prevent overpayments is consistent with CMS’s goal to pay claims correctly the first time. In addition, prepayment reviews can better protect Medicare funds because not all overpayments can be collected. We recommended that CMS seek legislation to allow Recovery Auditors, who are currently paid on a postpayment contingency basis from recovered payments, to conduct prepayment reviews. Although CMS did not concur with this recommendation, we continue to believe CMS should seek legislative authority to allow Recovery Auditors to conduct these reviews. Medicare Administrative Contractors (MACs) process Medicare claims, identify areas vulnerable to improper billing, and develop general education efforts focused on these areas. In March 2017, we evaluated MACs’ provider education efforts to help reduce improper billing. We found that CMS collects limited information about how the efforts focus on the areas MACs identify as vulnerable to improper billing, and recommended that CMS require MACs to report in sufficient detail to determine the extent to which their provider education efforts focus on vulnerable areas. According to CMS, the agency has updated its reporting guidance, and MACs will begin reporting more detailed information beginning in July 2017.

Whereas Medicare pays FFS providers for services provided, Medicare pays MAOs a fixed monthly amount per enrollee regardless of the services enrollees use. To identify and recover MA improper payments resulting from unsupported data submitted by MAOs for risk adjustment purposes, CMS conducts two types of RADV audits: national RADV activities and contract-level RADV audits.
Both types determine whether the diagnosis codes submitted by MAOs are supported by a beneficiary’s medical record. CMS conducts national RADV activities annually to estimate the national IPIA improper payment rate for MA. For 2016, CMS estimated that 71 percent of the improper payments resulted from insufficient medical record documentation submitted by MAOs that did not support the diagnoses they had previously submitted to CMS. The second type of RADV audit, contract-level audits, seeks to identify and recover improper payments from MAOs, and thus deter MAOs from submitting inaccurate diagnosis information. CMS conducted contract-level audits of 2007 payments for a sample of enrollees in 32 MA contracts. CMS’s goal is to conduct contract-level audits annually to recover improper payments efficiently, among other things. It plans to recoup overpayments by calculating a payment error rate for a sample of enrollees in each audited contract and extrapolating the error rate to estimate the total amount of improper payments made under the contract. CMS has RADV audits underway for three payment years—2011, 2012, and 2013. In general, CMS audits about 5 percent of contracts for each year, or roughly 30 contracts.

CMS calculates a beneficiary’s risk score—a relative measure of projected Medicare spending—based on both demographic characteristics and health status (diagnoses). The agency uses Medicare data to determine a beneficiary’s demographic characteristics; however, it must rely on data submitted by MAOs for health status information. CMS requires MAOs to submit diagnosis codes for each beneficiary in a contract in order to calculate risk scores. Since 2004, CMS has used the Risk Adjustment Processing System (RAPS) to collect diagnosis information from MAOs. In 2012, CMS began requiring MAOs to submit encounter data. Such data include diagnosis and treatment information for all medical services and items provided to an enrollee, with a level of detail similar to FFS claims. Since 2015, CMS has used both RAPS and encounter data submitted by MAOs to risk adjust MA payments. When CMS proposed collecting encounter data in 2008, the agency stated it would use the data for risk adjustment and may also use them for specified additional payment and oversight purposes. CMS has recognized the importance of ensuring that the data collected are complete—representing all encounters for all enrollees—and accurate—representing a correct record of all encounters that occurred—given the important functions for which the data will be applied.

In our 2016 report, we found several factors that hamper CMS’s recovery activities, including its failure to select contracts for audit that have the greatest potential for payment recovery, delays in conducting CMS’s first two RADV payment audits, and its lack of specific plans or a timetable for incorporating Recovery Audit Contractors (RACs) into the MA program to identify improper payments and help with their recovery. Our 2016 report found that the results from the RADV audits of 2007 payments indicated that the scores CMS calculates to identify contracts that are candidates for audit, called coding intensity scores, were not strongly correlated with the percentage of unsupported diagnoses. CMS defines coding intensity as the average change in the risk score component specifically associated with the reported diagnoses for the beneficiaries in each contract.
Increases in coding intensity measure the extent to which the estimated medical needs of the beneficiaries in a contract increase from year to year; thus, contracts whose beneficiaries appear to be getting “sicker” at a relatively rapid rate, based on the information submitted to CMS, will have relatively high coding intensity scores. Figure 1 shows, for example, that CMS reported that the percentage of unsupported diagnoses among the high coding intensity contracts it audited (36 percent) was nearly identical to the percentage among the medium coding intensity contracts (35.7 percent). Our report also found that the RADV audits were not targeted to contracts with the highest potential for improper payments. We identified two reasons that the RADV audits were not targeted on the contracts with the greatest potential for recoveries.

The first reason is that the coding intensity scores have shortcomings. For example, our report found that CMS’s calculation may be based on scores that are not comparable across contracts, because the years of data used for each contract may differ, and there are known year-to-year differences in coding intensity scores. In addition, CMS’s calculation does not distinguish between diagnoses likely coded by providers and diagnoses subsequently coded by MAOs. Diagnoses coded by providers are apt to be better supported by the medical record than diagnoses subsequently added by an MAO through medical record review. CMS has a method available to it—the Encounter Data System—that will distinguish between these two types of diagnoses. Although using encounter data would help target the submitted diagnoses most likely to be related to improper payments, CMS has not outlined plans to use the data this way. Furthermore, CMS follows contracts that are renewed or consolidated under a different existing contract within the same MAO, but CMS’s coding intensity calculation does not incorporate prior risk scores from an earlier contract into the MAO’s renewed contract. This could result in an improper payment risk if MAOs move beneficiaries with higher risk scores, such as those with special needs, into one consolidated contract.

The second reason audits are not targeted to the contracts with the greatest potential for recovery is that CMS does not always use the information available to it to select audit contracts with the highest potential for improper payments. CMS did not always target the contracts with the highest coding intensity scores, use results from prior contract-level RADV audits, account for contract consolidation, or account for contracts with high enrollment. For example, only four of the contracts selected for the 2011 RADV audit had coding intensity scores at the 90th percentile or above. Even though we found that coding intensity scores are not strongly correlated with diagnostic discrepancies, they are still somewhat correlated. Also, CMS’s 2011 contract selection methodology did not consider results from the agency’s prior RADV audits, potentially overlooking information indicating contracts with known improper payment risk. Finally, even though the potential dollar amount of improper payments to MAOs with high rates of unsupported diagnoses is likely greater when contract enrollment is large, CMS officials stated that the 2011 contract-level RADV audit contract selection did not account for contracts with high enrollment.
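To make the coding intensity measure concrete, the following is a minimal sketch of the definition used in this report: the average change, across a contract's beneficiaries, in the risk score component associated with reported diagnoses. The contract identifiers, score values, and data layout are hypothetical illustrations, not CMS's actual RADV data or methodology.

```python
# Minimal sketch of a coding intensity calculation, per the report's
# definition: the average change in the risk score component associated
# with reported diagnoses for the beneficiaries in each contract.
# Contract IDs and score values below are illustrative assumptions.

from statistics import mean

# (contract_id, beneficiary_id) -> diagnosis-based risk score component,
# for two consecutive payment years (hypothetical data).
disease_score_y1 = {("H0001", "b1"): 0.40, ("H0001", "b2"): 0.55,
                    ("H0002", "b1"): 0.30, ("H0002", "b2"): 0.35}
disease_score_y2 = {("H0001", "b1"): 0.52, ("H0001", "b2"): 0.70,
                    ("H0002", "b1"): 0.31, ("H0002", "b2"): 0.37}

def coding_intensity_by_contract(year1, year2):
    """Average per-beneficiary change in the diagnosis-based score
    component, grouped by contract."""
    changes = {}
    for key, score_y1 in year1.items():
        if key in year2:  # only beneficiaries present in both years
            contract, _ = key
            changes.setdefault(contract, []).append(year2[key] - score_y1)
    return {contract: mean(deltas) for contract, deltas in changes.items()}

scores = coding_intensity_by_contract(disease_score_y1, disease_score_y2)
for contract, intensity in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{contract}: average disease-score change = {intensity:+.3f}")
# Contracts whose beneficiaries appear to be getting "sicker" fastest
# (largest average increase) would rank highest under a selection
# approach that relied on coding intensity alone.
```

As the report notes, a ranking like this one proved only weakly correlated with the percentage of unsupported diagnoses actually found in audits, which is why relying on it alone to select contracts is problematic.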
We made two recommendations to address these contract-selection issues: we recommended that CMS (1) improve the accuracy of coding intensity calculations and (2) modify its processes for selecting contracts for RADV audit to focus on those most likely to have improper payments. In July 2017, CMS officials told us that the agency is working to implement these recommendations regarding the selection of contracts for audit. These officials said that CMS is reevaluating the design of the RADV audits to ensure its rigor in the context of all the payment error data acquired since the original design of the RADV audits, including an examination of whether coding intensity is the best criterion to use to select contracts for audit.

Our 2016 report found that prior contract-level RADV audits have been ongoing for years and that CMS lacks an annual timetable to conduct and complete audits. CMS officials reported at that time that the current and previous contract-level RADV audits had been ongoing for several years. CMS has audits for payment years 2011, 2012, and 2013 underway. We concluded that this slow progress in completing audits conflicted with CMS’s goal of conducting contract-level RADV audits annually and slowed recovery of improper payments. CMS lacked a timetable that would help the agency complete these contract-level audits annually. In this regard, CMS had not followed established project management principles, which call for developing an overall plan to meet strategic goals and to complete projects in a timely manner. In addition to the lack of a timetable, we found other factors that lengthened the time frame of the contract-level audit process. CMS notifies MAOs sequentially, first identifying the contracts selected for audit and then, sometimes months later, identifying the beneficiaries under those contracts; this time gap hinders the agency from conducting annual audits. Technology problems with CMS’s system for receiving medical records were the main cause of the delay in completing CMS’s contract-level audits of 2011 payments. Additional technical issues with other systems led CMS to more than triple the medical record submission time frame for the 2011 audits.

Our report found that disputes and appeals of contract-level RADV audits have also continued for years, and CMS has not incorporated measures to expedite the process. Nearly all of the MAOs whose contracts were included in the 2007 contract-level RADV audit cycle disputed at least one diagnosis finding following medical record review. CMS stated that MAOs disputed a total of 624 (4.3 percent) of the 14,388 audited diagnoses; the disputes were submitted from March through May 2013, but determinations on them were not complete until July 2014. In addition, because the dispute process took a year and a half to complete, CMS officials stated that it did not receive all 2007 appeal requests for hearing officer review until August 2014. The hearing officer adjudicated or received a withdrawal request for 377 of the 624 appeals from August 2014 through September 2015. For the 2011 audit cycle, CMS officials stated that the medical record dispute process will be incorporated into the appeal process. Thus, MAOs can request reconsideration of medical record review determinations concurrent with the appeal of payment error calculations, rather than sequentially, as was the case for the 2007 cycle. While this change may help, the new process does not set time limits for when reconsideration decisions must be issued.
Lack of explicit time frames for appeal decisions at reconsideration hinders CMS’s collection of improper payments because the agency cannot recover extrapolated overpayments until the MAO exhausts all levels of appeal, and the lack of time frames is inconsistent with established project management principles. We made two recommendations to address these issues: We recommended that CMS take steps to improve the timeliness of the RADV audit process. In July 2017, CMS officials told us that, as part of the agency’s efforts to consolidate program integrity initiatives into one center, the decision was made to transition RADV contract-level audits to the CMS Center for Program Integrity (CPI) at the end of 2016. With the transition, CMS is implementing a formal project management structure to facilitate the timeliness of the audit process. We also recommended that CMS require that reconsideration decisions be rendered within a specified number of days, similar to other time frames in the Medicare program. In July 2017, CMS officials told us that the agency is actively considering options for expediting the appeals process.

Our 2016 report found that CMS had not expanded the RAC program to MA, as it was required to do by the end of 2010 by the Patient Protection and Affordable Care Act. Implementing an MA RAC would help CMS address the resource requirements of conducting contract-level audits. In 2014, CMS issued a request for proposals for an MA RAC that would audit improper payments in three areas of MA, but CMS officials told us that the agency did not receive any proposals to do the work in those audit areas and that its goal was to reissue the MA RAC solicitation in 2015. CMS then reconsidered the audit work in the request for the MA RAC. In December 2015, CMS issued a request for information seeking industry comment on how an MA RAC could be incorporated into CMS’s existing contract-level RADV audit framework. In the request, CMS stated that it was seeking an MA RAC to help the agency expand the number of MA contracts subject to audit each year, and stated that its ultimate goal is to have all MA contracts subject to either a contract-level RADV audit or another audit that would focus on specific diagnoses determined to have a high probability of being erroneous. Officials from three Medicare FFS RACs all told us their organizations had the capacity and willingness to conduct contract-level RADV audits. We recommended that CMS develop specific plans for incorporating a RAC into the RADV program. In July 2016, CMS described to us its initial steps to meet this goal. In July 2017, CMS officials told us that the agency is evaluating its strategy for the MA RAC with CMS leadership.

In July 2014, we recommended that CMS complete all the steps necessary to validate encounter data, including performing statistical analyses, reviewing medical records, and providing MAOs with summary reports on CMS’s findings, before using the data to risk adjust payments or for other intended purposes. In our 2017 report, we found that CMS had made limited progress toward validating encounter data. (See fig. 2.) As of January 2017, CMS had begun compiling basic statistics on the volume and consistency of data submissions and preparing automated summary reports for MAOs indicating the diagnosis information used for risk adjustment; however, CMS had not yet taken other important steps identified in its Medicaid protocol, which we used for comparison.
The steps CMS had not yet taken as of our January 2017 report are the following.

Establish benchmarks for completeness and accuracy. This step would establish requirements for collecting and submitting MA encounter data. Without benchmarks, CMS does not have objective standards against which to hold MAOs accountable for complete and accurate data reporting.

Conduct analyses to compare with established benchmarks. This would help ensure accuracy and completeness. Without such analyses, CMS has limited ability to detect potentially inaccurate or unreliable data.

Determine sampling methodology for medical record review and obtain medical records. Medical record review would help ensure the accuracy of encounter data. Without these reviews, CMS cannot substantiate the information in MAO encounter data submissions and lacks evidence for determining the accuracy of encounter data.

Summarize analyses to highlight individual MAO issues. This step would provide recommendations to MAOs for improving the completeness and accuracy of encounter data. Without actionable and specific recommendations from CMS, MAOs might not know how to improve their submissions.

In July 2014, we also recommended that CMS establish specific plans and time frames for using the data for all intended purposes in addition to risk adjusting payments to MAOs. We found in our 2017 report that CMS had made progress in defining its objectives for using MA encounter data for risk adjustment and in communicating its plans and time frames to MAOs. CMS reported it plans to fully transition to using MA encounter data for risk adjustment purposes by 2020. However, even though CMS had formed general ideas of how it would use MA encounter data for purposes other than risk adjustment, as of January 2017 it had not specified plans and time frames for most of the additional purposes for which the data may be used. These other purposes include activities to support program integrity. In July 2017, CMS officials told us that the agency had not taken any further actions in response to our July 2014 recommendations.

Because CMS is making payments that are based on data that have not been fully validated for completeness and accuracy, the soundness of billions of dollars in Medicare expenditures remains unsubstantiated. In addition, without planning for all of the authorized uses, the agency cannot be assured that the amount and types of data being collected are necessary and sufficient for specific purposes. Given CMS’s limited progress in planning and time frames for all authorized uses of the data, we continue to believe CMS should implement our July 2014 recommendations to establish specific plans for using MA encounter data and to thoroughly assess data completeness and accuracy before using the data to risk adjust payments or for other purposes. In response to our 2014 recommendations, the Department of Health and Human Services did not specify a date by which CMS would develop plans for all authorized uses of the data and did not commit to completing data validation before using the data for risk adjustment in 2015. CMS began using encounter data for risk adjustment in 2015, although it had not completed activities to validate the data.

In conclusion, Medicare remains inherently complex and susceptible to improper payments. Therefore, actions CMS takes to ensure the integrity of the MA program by identifying, reducing, and recovering improper payments would be critical to safeguarding federal funds.
Chairman Buchanan, Ranking Member Lewis, and Members of the Subcommittee, this concludes my prepared statement. I would be pleased to respond to any questions that you may have. For questions about this statement, please contact James Cosgrove at (202) 512-7114 or cosgrovej@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals who made key contributions to this testimony include Martin T. Gahart (Assistant Director), Aubrey Naffis (Analyst-in-Charge), Manuel Buentello, Elizabeth T. Morrison, Jennifer Rudisill, and Jennifer Whitworth. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
GAO has designated Medicare as a high-risk program because of its size, complexity, and susceptibility to mismanagement and improper payments, which reached an estimated $60 billion in fiscal year 2016. CMS contracts with MAOs to provide services to about one-third of all Medicare beneficiaries, and paid MAOs about $200 billion for their care in 2016. CMS's payments to the MAOs vary based on the health status of beneficiaries. For example, an MAO receives a higher risk-adjusted payment for an enrollee with a diagnosis of diabetes than for an otherwise identical enrollee without this diagnosis. Improper payments in MA arise primarily from diagnosis information unsupported by medical records that leads CMS to increase its payments. This testimony is based on GAO's 2016 and 2017 reports addressing MA improper payments and highlights (1) factors that have hindered CMS's efforts to identify and recover improper payments through payment audits and (2) CMS's progress in validating encounter data for use in risk adjusting payments to MAOs. For these reports, GAO reviewed research and agency documents, analyzed data from ongoing RADV audits, and compared CMS's activities with the agency's protocol for validating Medicaid encounter data and federal internal control standards. GAO interviewed CMS officials for both reports, and also asked for updates on the status of GAO's prior recommendations for this statement. The Centers for Medicare & Medicaid Services (CMS) estimated that about $16 billion—nearly 10 percent—of Medicare Advantage (MA) payments in fiscal year 2016 were improper. To identify and recover MA improper payments, CMS conducts risk adjustment data validation (RADV) audits of prior payments. These audits determine whether the diagnosis data submitted by Medicare Advantage organizations (MAOs), which offer private plan alternatives to fee-for-service (FFS) Medicare, are supported by a beneficiary's medical record. CMS pays MAOs a predetermined monthly amount for each enrollee. CMS uses a process called risk adjustment to project each enrollee's health care costs using diagnosis data from MAOs and demographic data from Medicare. In its 2016 report, GAO found several factors impeded CMS's efforts to identify and recover improper payments, including: RADV audits were not targeted to contracts with the highest potential for improper payments. The agency's method of calculating improper payment risk for each contract, based on the diagnoses reported for the contract's beneficiaries, had shortcomings, and CMS did not use other available data to select the contracts with the greatest potential for improper payment recovery. Substantial delays in RADV audits in progress jeopardize CMS's goal of eventually conducting annual RADV audits. CMS had RADV audits underway for payment years 2011, 2012, and 2013. CMS had not expanded the use of Recovery Audit Contractors (RAC) to the MA program as required by law in 2010. RACs have been used in other Medicare programs to recover improper payments for a contingency fee. GAO recommended that CMS improve the accuracy of its methodology for identifying contracts with the greatest potential for improper payment recovery, modify the processes for selecting contracts to focus on those most likely to have improper payments, and improve the timeliness of the RADV audit process. CMS reported in July 2017 that it had taken initial actions to address these recommendations, but none had been fully implemented. 
GAO also recommended that CMS develop specific plans for incorporating a RAC into the RADV program. In July 2017, CMS reported that the agency is evaluating its strategy for the MA RAC with CMS leadership. CMS has begun to use encounter data, which are similar to FFS claims data, along with diagnosis data from MAOs to help ensure the proper use of federal funds by improving risk adjustment in the MA program. Encounter data include more information about the care and health status of MA beneficiaries than the data CMS uses now to risk adjust payments. In its January 2017 report, GAO found CMS had made progress in developing plans to use encounter data for risk adjustment. However, CMS had made limited progress in validating the completeness and accuracy of MA encounter data, as GAO recommended in 2014. GAO continues to believe that CMS should establish plans for using encounter data and thoroughly assess the data for completeness and accuracy before using them to risk adjust payments.
One of the DOD lessons learned from Iraq is to begin drawdown operations early because of their complexity. To that end, the Marine Corps began equipment drawdown operations for Afghanistan in August 2011, and the Army began in January 2012—both starting well before the January 2013 announcement that combat operations would end in December 2014. According to DOD officials, one reason these early actions to reduce equipment were warranted is that it takes longer to draw down equipment from Afghanistan than it does to reduce personnel levels. CENTCOM is the command responsible for drawdown operations in Afghanistan. However, the military services determine disposition for their equipment that is to be drawn down in Afghanistan, and TRANSCOM arranges transportation for the items that the services decide to return from Afghanistan. Figure 1 illustrates the equipment-disposition framework used to draw down Army and Marine Corps equipment from Afghanistan.

Officials in the Army and the Marine Corps manage their respective services’ equipment throughout the equipment’s life cycle. Once CENTCOM officials determine that a piece of equipment is no longer needed in Afghanistan (see fig. 1, box 2), a request for disposition is sent to equipment managers in the United States, who decide whether to divest or retain the item. If an item is retained it will be returned from Afghanistan. If an item is divested it will be transferred or destroyed. These three general options are described as follows.

Return: This is the shipment of equipment to a repair facility. Service officials forecast that the majority of equipment is likely to be returned from Afghanistan to service inventories. Equipment that is to be returned is delivered to one of the seven service-operated Redistribution Property Assistance Team (RPAT) yards or similar facilities situated throughout Afghanistan, where each piece is inspected and readied for transport. (See apps. II and III.) (This review focuses on theater-provided equipment that will be repaired; there is also unit-owned equipment in theater that will be returned to units’ home stations.)

Transfer: This is the redistribution of equipment to either another U.S. agency or the government of another country. DOD has described this as a limited option due to the limited ability of the Afghanistan government to absorb and maintain large amounts of equipment. (See app. IV.)

Destroy: This is the demilitarization of equipment at Defense Logistics Agency (DLA) Disposition Service disposition sites. The material may then be sold as scrap. If a serviceable item is to be destroyed, DOD requires certification that the item has been vetted through a service process and that all avenues for reutilization/transfer have been exhausted, or that a cost-benefit analysis was conducted and destruction found to be the most cost-effective option. (See app. V.)

In deciding whether to retain or divest an item, equipment managers weigh several factors, including whether a newer generation of the item should be purchased at a lower cost and the potential cost and benefit to be derived from repairing or refurbishing the item. Managers’ decisions may also be guided by service-wide decisions laid out in strategies specific to a particular equipment type, such as the Mine Resistant Ambush Protected vehicle. Equipment managers stated that these strategies and other factors result in published lists of equipment that are marked either for divestiture (that is, transfer or destruction) or for retention in the service inventory.
The Marine Corps publishes this information by equipment model in a “Ground Equipment Reset Playbook,” which integrates Marine Corps equipment-requirements data, equipment-reset strategies, and on-hand inventories into a single-source document that is used by all levels of the organization in support of equipment drawdown. Marine Corps equipment managers are guided by Marine Corps Order 4790.19, Depot Maintenance Policy, which calls for comparing the cost of repair against the cost of procuring the item. Unlike the Army guidance, which establishes a maintenance expenditure limit for a specific item, the Marine Corps order indicates that if the repair equals or exceeds 65 percent of the standard unit price or replacement cost for any piece of equipment, the equipment is not economical to repair. Marine Corps officials stated that these uneconomical-to-repair items should, ordinarily, be divested.

Transportation costs are another factor that Army and Marine Corps equipment managers should generally consider when making equipment disposition decisions. Specifically, Army guidance includes transportation costs in the determination of whether it is economical to repair an item that is located overseas. Marine Corps guidance also indicates that equipment managers should consider the cost to return an item when making disposition decisions. Specifically, a Marine Corps order regarding equipment return indicates that an item normally should not be returned when the cost to return the item exceeds the cost to procure it new. (See Marine Corps Order 4440.31E, Marine Corps Retention and Excess Returns Policies for Wholesale and Retail Materiel Assets (June 23, 1989).) For the purposes of this report, we refer to uneconomical-to-return-and-repair items as items that are uneconomical to repair either because the sum of transportation and repair costs exceeds authorized limits (Army) or because either the transportation or the repair costs exceed authorized limits (Marine Corps).

Returning equipment can be expensive due to the transportation costs of moving an item out of landlocked Afghanistan (see table 1). Moving a tactical vehicle back to the United States, for example, can range in cost from $0.73 per pound to more than $3.30 per pound. According to DLA Disposition Service officials, destruction of the same type of vehicle costs between $0.28 and $0.31 per pound. (Table 1 presents estimated per-vehicle return costs by route, including the Pakistan Ground Lines of Communication (PAKGLOC) and the Northern Distribution Network (NDN)-Russia route. The cost per vehicle is based on data furnished by U.S. Transportation Command (TRANSCOM) for the estimated cost of returning a heavy and a light vehicle from Kandahar Air Force Base, Afghanistan; the lowest cost reflects transport of a trailer, and the highest reflects transport of a heavy (18.5-ton) tactical vehicle on the same route.)

To gain as much efficiency as possible in a drawdown, the military seeks a synchronized transportation process that links the drawdown of personnel, equipment, and materiel. A well-synchronized process can expedite movement out of Afghanistan and avoid backlogs at facilities and along transportation networks. Synchronization is characterized by timely and predictable airflow and seaflow, and by the ability to adjust transportation schedules. CENTCOM has issued instructions that guide units as they arrange for equipment to leave Afghanistan through a variety of air and surface routes (see fig. 2).
On the ground, equipment moves through Pakistan, using the Pakistan Ground Lines of Communication (PAKGLOC), or through European and Central Asian countries, as part of the Northern Distribution Network (NDN), to seaports from which it can be loaded onto ships for onward movement overseas. As we have previously reported, however, geopolitical complexities in the region make the use of these ground routes challenging for equipment return. There are also airlift and multimodal airlift (air and sea) options that fly equipment from Afghanistan to ports in the region, from which the equipment can be transported onward via ship. According to DOD, although airlift and multimodal airlift are the more expensive transportation options, they have to date been the most reliable in the equipment drawdown due to limitations associated with the ground routes (see app. III). DOD has made some progress in its drawdown of equipment from Afghanistan, but ongoing uncertainties about the future force in Afghanistan could affect the future progress of the drawdown. Specifically, from October 2012 to October 2013, DOD returned from Afghanistan or destroyed 14,664 vehicles, or an average of 1,128 vehicles per month. Future progress toward drawdown goals will depend on equipment turn-in rates, which, in turn, depend on having more information about the post-2014 force level and mission. However, DOD's future force levels and mission requirements beyond 2014 have not yet been announced. Moreover, from March 2013 to October 2013, the number of vehicles turned in by units for the drawdown averaged 55 percent of what had been forecast. This is because some vehicles that had been forecast for turn-in were instead redistributed to other units in Afghanistan. A senior DOD official stated that units have retained equipment because of uncertainty regarding future operational needs in Afghanistan. Once the post-2014 force level and mission are announced, these vehicle turn-in rates may increase. DOD officials reported that, as of October 2013, thousands of vehicles remained in Afghanistan. In March 2012, command officials in Afghanistan established monthly goals for reducing equipment in country. To meet their goals, DOD established capacities at DLA Disposition Services sites, at RPAT yards, and on transportation routes. For the purposes of this report, the term capacity refers to the infrastructure, resources, and personnel in place to return or destroy a specific number of vehicles or containers, or both, in a month. For example, in Afghanistan, DLA Disposition Services initially established a destruction capacity of 450 vehicles per month. The Army unit responsible for RPAT facilities in Afghanistan initially established a monthly processing capacity, and TRANSCOM also initially established a monthly capacity to move vehicles out of the country. Although DLA and TRANSCOM officials told us that they are postured to surge beyond current capacities if necessary, there has not been any reason to do so because the amount of equipment turned in has not required them to rely on surge capacity. TRANSCOM officials also told us that building the capacity of the routes requires time and a steady increase in the amount of equipment required to be moved, since frequent use incentivizes contractors to maintain en route infrastructure.
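As a quick check on the figures above, the following sketch reproduces the monthly-average arithmetic; the monthly forecast quantity is hypothetical, used only to illustrate what a 55 percent turn-in rate implies.

```python
# Reported totals: 14,664 vehicles returned or destroyed from October 2012
# through October 2013 (13 months), an average of about 1,128 per month.
vehicles_total, months = 14_664, 13
print(f"average per month: {vehicles_total / months:,.0f}")  # ~1,128

# From March to October 2013, turn-ins averaged 55 percent of forecast.
# Illustrative only: if units had been forecast to turn in 2,000 vehicles
# in a month, roughly 1,100 would actually have arrived for disposition.
forecast = 2_000  # hypothetical monthly forecast, not a reported figure
print(f"expected turn-ins at 55%: {0.55 * forecast:,.0f}")
```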
In June 2013, DOD transitioned from its monthly reduction goals to new classified goals based on operational milestones. DOD officials may further adjust these classified drawdown goals as the post-December 2014 mission and size of the enduring force are clarified, and as a result the reduction of vehicles may accelerate. Equipment transfers to other countries or other U.S. agencies contribute to meeting the equipment-drawdown goals; to date, this disposition option has been limited. (See app. IV.) In June 2013, DOD produced detailed supporting plans that included phased operations as well as milestones and objectives for equipment reduction. The specific goals are classified; however, DOD officials in theater stated that in the period from June 2013 to September 2013, DOD's Afghanistan equipment reductions exceeded these goals. Uncertainties about the future force in Afghanistan could affect the progress of the drawdown. Because the United States has not yet announced its post-2014 force level and mission in Afghanistan, future equipment needs are still uncertain. A high-level Army official has stated that the goal is to draw down all equipment not needed by the enduring force from Afghanistan by October 2014. However, with the bilateral security agreement pending, the mission and size of the enduring force have not yet been finalized, and the date by which all equipment must be drawn down could change. DOD will need this information to determine the amounts and types of equipment that will remain in Afghanistan for the enduring presence and, consequently, the amounts and types of vehicles that will be drawn down. These post-2014 uncertainties may affect the rate at which vehicles are turned in by units in Afghanistan, thereby affecting the progress of DOD's drawdown. The current vehicle drawdown pace has been limited by lower-than-forecast quantities being turned in by units for drawdown. From March 2013 to October 2013, the number of vehicles turned in by units for the drawdown averaged 55 percent of what had been forecast. In some instances, vehicles that had been forecast for reduction were redistributed to other units in Afghanistan rather than turned in for destruction or return. Commanders in Afghanistan must ensure that they have the equipment necessary to accomplish their mission and sometimes have found it necessary to retain equipment rather than release it. A senior DOD official stated that in some cases units have retained equipment because of uncertainty related to future operational needs in Afghanistan. Consequently, the flow of vehicles to be destroyed at DLA sites or returned via RPAT yards and transportation routes has been limited. These turn-in rates may increase once the post-2014 force level and mission are announced. DOD has taken some steps to improve efficiencies and manage costs in its Afghanistan drawdown processes. For example, CENTCOM amended its drawdown instruction to allow for aggregation of equipment at U.S. ports. According to DOD officials, this will allow for shipment of equipment via rail, resulting in potential savings when compared with trucking costs. However, as a result of ineffective internal controls, the Army and Marine Corps may be incurring unnecessary costs by returning equipment that potentially exceeds service needs or that is not economical to return and repair.
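The rail-versus-truck savings claim can be checked against example figures cited below in this report (about $20,300 by truck versus $11,700 by rail for three Mine Resistant Ambush Protected vehicles moved coast to coast); a quick computation shows the savings fall within the reported ranges.

```python
# Example figures cited later in this report for three MRAP vehicles moved
# from an East Coast port to a West Coast depot.
truck_cost, rail_cost = 20_300, 11_700
savings = (truck_cost - rail_cost) / truck_cost
# Prints ~42%, consistent with the "up to 45 percent" claim and the
# 20-60 percent historical range TRANSCOM reports for large shipments.
print(f"rail savings vs. truck: {savings:.0%}")
```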
DOD guidance on supply chain materiel management indicates that equipment exceeding certain service-approved quantities should not be retained unless economic or contingency reasons support its retention. We found that in a 12-month period the Army and Marine Corps returned more than 1,000 vehicles that exceeded their service-approved quantities, thereby incurring possible transportation costs of up to $107,400 per vehicle, depending on the type of vehicle, for equipment that might no longer be needed. Neither the Army nor the Marine Corps documented and reviewed justifications for returning these potentially unneeded items, although federal internal control standards state that documentation and review should be part of an organization's management to provide reasonable assurance that operations are effective and efficient. DOD guidance also states that all costs associated with materiel management, including transportation costs, shall be considered in making best-value decisions throughout the DOD supply chain. While the services considered repair costs, they did not consider transportation costs in determining whether vehicles were economical to return and repair. Consequently, it is unclear how many vehicles the services may have returned that were uneconomical to return and repair. Without adequate internal controls to ensure that DOD is returning only equipment that is needed and is considering transportation costs in these decisions, the services are at risk for incurring unnecessary drawdown expenditures. DOD has taken some steps to improve efficiencies and manage costs in its drawdown processes, although DOD officials have stated that cost savings or efficiencies gained as a result of these changes have yet to be fully realized. In May 2013, CENTCOM amended its drawdown instruction to provide TRANSCOM with earlier notice of the amount and types of equipment requiring transportation. Combatant commanders are responsible for redeployment planning and must consider synchronization—that is, linking the redeployment of personnel, equipment, and materiel in a timely manner. To move equipment out of the country as quickly and cost-effectively as possible, TRANSCOM needs advance knowledge of the amount of equipment to be moved. Acquiring this information is complicated by the fact that, while personnel movements out of Afghanistan can be known 180 days in advance, DOD officials have told us that transportation requirements for equipment are more difficult to identify as early because the equipment may still be needed in country. Prior to May 2013, it was difficult to obtain advance knowledge of the amount of equipment to be moved because TRANSCOM officials were notified that equipment was available for movement only when it arrived at an RPAT yard. To create more efficient transportation scheduling, in May 2013 CENTCOM amended its drawdown instruction to require that transportation officials be notified almost 90 days before equipment is turned in and that transportation needs be confirmed 30 days before the equipment arrives at the RPAT yard. CENTCOM also amended its drawdown instruction in May 2013 to allow for the aggregation of equipment at U.S. ports. According to TRANSCOM officials, this change may yield savings because aggregating equipment at U.S. ports will allow for the subsequent shipment of the equipment to final destinations via rail, resulting in savings of up to 45 percent over trucking the equipment to depots.
For example, the cost to move three Mine Resistant Ambush Protected vehicles from an East Coast port to a West Coast depot by truck exceeds $20,300, while moving the same cargo by rail costs $11,700. According to TRANSCOM analysis, rail transport is four times more fuel efficient than truck transport and historically has resulted in savings of between 20 and 60 percent for shipments of large volumes of cargo. TRANSCOM officials expect that cost savings will accrue based on these changes to the guidance. In addition, TRANSCOM has reduced costs by more than doubling the number of vehicles returned through the PAKGLOC and less expensive air routes since June 2013. TRANSCOM estimates that this increased use of the PAKGLOC and less expensive air routes has resulted in a potential cost avoidance of about $55 million. Ineffective internal controls over the equipment disposition decision process allowed the Army and Marine Corps to return equipment from Afghanistan that may have been unneeded. The DOD instruction providing supply chain materiel-management policy states that all costs associated with materiel management shall be considered in making best-value decisions throughout the DOD supply chain. According to this guidance, best-value decisions include the consideration of both cost and noncost factors. The guidance further indicates that equipment exceeding approved acquisition objectives should not be retained unless economic or contingency reasons support its retention. The approved acquisition objective is defined as the quantity of an item authorized for peacetime and wartime requirements to equip and sustain U.S. and allied forces according to DOD policies and plans. Consequently, any item in the inventory that exceeds its approved acquisition objective is potentially unneeded. (The term “vehicle” refers to rolling stock, such as tactical wheeled vehicles; multipurpose or special-purpose military wheeled platforms that transport personnel and all classes of supply; and powered and unpowered trailer systems. Vehicles can be specially designed for the military or can be commercial vehicles modified to meet certain military requirements. Vehicles are classified by weight, ranging from less than 2.5 tons to greater than 10 tons.) We estimate that the cost of transporting the more than 1,000 vehicles that exceeded approved acquisition objectives ranges from $5.9 million, assuming smaller, lighter vehicles transported through the least expensive routes at the low end, to $111.1 million, assuming larger, heavier vehicles transported through the most expensive routes at the high end. (See app. I for a description of the methodology supporting this analysis.) According to Army and Marine Corps officials, items exceeding approved acquisition objectives may have been returned for a variety of reasons. For example, officials told us that an item may be returned because it is the newest model of a certain type of equipment, or because there may be an increased requirement for an item in the future. However, we could find no documentation that justified the return of these items. According to federal internal control standards, documentation and review should be part of an organization's management to provide reasonable assurance that operations are effective and efficient (GAO/AIMD-00-21.3.1). By returning items that exceed approved acquisition objectives without documenting and reviewing the justifications for doing so, the services are at risk for spending funds unnecessarily to retain items that may not be needed; a sketch of the inventory-to-objective comparison underlying this finding appears below.
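The comparison underlying the excess-return finding (described in detail in app. I) pairs each service's on-hand inventory with its approved acquisition objective (AAO) and flags return decisions for item types already above that objective. A minimal sketch follows; the data and field names are hypothetical, not drawn from the services' databases.

```python
# Hypothetical records; the actual analysis used service equipment databases
# and roughly 9,000 vehicle-return disposition decisions (see app. I).
inventory = {"MRAP-A": 1_200, "trailer-B": 800}   # on-hand quantity by type
aao       = {"MRAP-A": 1_000, "trailer-B": 900}   # approved acquisition objective
returns   = [("MRAP-A", 150), ("trailer-B", 40)]  # (type, vehicles returned)

def flag_excess_returns(inventory, aao, returns):
    """Flag returned vehicles of types whose inventory already exceeds the
    AAO; such returns are potentially unneeded under DOD materiel policy
    unless economic or contingency reasons are documented."""
    over = {t for t in inventory if inventory[t] > aao.get(t, float("inf"))}
    return [(t, n) for t, n in returns if t in over]

for item_type, count in flag_excess_returns(inventory, aao, returns):
    print(f"{count} returned {item_type} vehicles exceed the AAO "
          f"(transportation cost up to $107,400 per vehicle)")
```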
Transportation costs are equally consequential: without considering them, the services risk paying to transport and repair items that otherwise might not be economical to retain. The Army has established procedures to determine whether an item is economical to return and repair. To ensure economic and operational effectiveness, Army officials use a preset maintenance expenditure limit to decide whether an item is economical to repair. For items located overseas, the Army specifies that transportation and handling costs be included in determining whether an item is economical to repair. When making the decision to return and repair items from Afghanistan, however, Army officials considered repair costs but omitted transportation costs as a factor for consideration, though it is unclear why these costs were omitted. It is unclear how Army officials could determine whether the return and repair of equipment was economical without including transportation costs as a decision-making factor. It is also unclear how many items returned by the Army from Afghanistan would have been identified as uneconomical to return and repair if transportation costs had been included in disposition decision making. The Marine Corps order similarly calls for weighing return costs (see Marine Corps Order 4790.19, Depot Maintenance Policy, para. 4.a(2)(g)); by not including all costs in their decision-making process, the services are at risk of allowing the return and repair of uneconomical-to-return-and-repair equipment. DOD officials have taken some positive steps to reduce costs and create efficiencies in the drawdown of equipment from Afghanistan. The services have established processes and have guidance that, if applied to decisions about the disposition of equipment in Afghanistan, can inform decision making about such equipment and ensure that decisions are well-reasoned, reviewable, and made on the basis of best value. However, by returning items without documentation and review of justifications and by not including all costs in the decision-making process, the Army and the Marine Corps are at risk of making unnecessary expenditures. As the equipment drawdown accelerates, potentially unneeded and uneconomical-to-return-and-repair items will compete with other equipment for transportation assets. With improved internal controls, the Afghanistan drawdown operations could be made more efficient and cost-effective. To reduce the risk of unnecessary expenditure of resources, we recommend that the Secretary of Defense direct the Secretary of the Army and the Commandant of the Marine Corps to take the following two actions:

1. ensure that justifications for returning items that exceed their approved acquisition objectives are documented and receive management review; and

2. ensure that transportation and all other relevant costs are included in disposition decision making.

In written comments on the earlier FOUO version of this report, DOD concurred with both of the recommendations and stated that it will be taking steps to further improve service disposition decision-making processes. DOD also provided technical clarifications, which we incorporated as appropriate. DOD's written comments are reprinted in appendix VI. We are sending copies of this report to the appropriate congressional committees; the Secretary of Defense; the Secretary of the Army; and the Commandant of the Marine Corps. The report is also available at no charge on GAO's website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-5431 or russellc@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VII.
To conduct this work, we interviewed and obtained documentation from cognizant officials from U.S. Central Command (CENTCOM), the combatant command responsible for operations in Afghanistan, as well as from the military services. To examine the status of the Department of Defense's (DOD) efforts to reduce equipment in Afghanistan, we identified drawdown goals and key sources of documentation. Specifically, we identified and analyzed the processes and facilities that support CENTCOM in the execution of the drawdown and examined guidance from the Office of the Secretary of Defense, Joint Chiefs of Staff, CENTCOM, U.S. Transportation Command (TRANSCOM), military departments and services, and the Defense Logistics Agency (DLA). We then examined the metrics used by drawdown decision makers, reviewed Afghanistan drawdown statistics, and compared these statistics with DOD's drawdown goals. To analyze the steps DOD has taken to create efficiencies and consider costs concerning the return of equipment from Afghanistan, we obtained and analyzed standard operating procedures and other guidance used to return equipment from Afghanistan and interviewed officials familiar with the transportation of equipment out of Afghanistan. To determine the extent to which the services have taken steps in their disposition decision-making process to consider costs related to the return of equipment, we reviewed equipment-management guidance and interviewed service equipment managers. We focused our review on vehicles because of the costs associated with transporting them, the number of vehicles that are candidates for disposition, and the ability to track vehicles by identification number. We also focused our review on theater-provided equipment, or nonunit equipment, that will be repaired. In addition, we limited our review to the Army and the Marine Corps, since these services have owned the preponderance of equipment in Afghanistan. We examined the factors and data used by decision makers who issued disposition instructions and analyzed the results of those decisions. Specifically, we examined service inventories and approved acquisition objectives—the amount of equipment needed by the services for peacetime and wartime requirements—used by equipment managers to make disposition decisions. To determine the service-approved acquisition objective for any specific item, we relied on the databases used by service officials to manage the relevant equipment. These data, along with the disposition decision and inventory data, were used to determine how many vehicles were returned that exceeded approved acquisition objectives. Specifically, to determine whether an item that exceeded its approved acquisition objective was returned from mid-March 2012 to mid-March 2013, we conducted one analysis for the Army and one for the Marine Corps. For the Army, its equipment-management officials determined which types of rolling stock exceeded service-approved acquisition objectives. We then took a list of these types of rolling stock that the Army personnel had identified and compared this list to data that the Army provided that contained the service's disposition decisions regarding rolling stock or vehicles in Afghanistan from mid-March 2012 to mid-March 2013. For the Marine Corps analysis, equipment-management officials provided us with approved acquisition data and inventory data from fiscal year 2012 and fiscal year 2013.
We then compared the inventory to the approved acquisition objective for both years to identify the types of vehicles that exceeded the approved acquisition objective. We took this list of the types of vehicles and then compared it to data that the Marine Corps provided that contained the service's disposition decisions regarding rolling stock or vehicles in Afghanistan from mid-March 2012 to mid-March 2013. To determine the results of each service's equipment disposition decisions, we obtained and reviewed data containing approximately 10,000 disposition decisions made by the Army and Marine Corps over a 12-month period. Of these decisions, roughly 87 percent resulted in the return of vehicles to the United States (the remaining 13 percent resulted in either the return of the vehicles to a site outside of the United States or the transfer or destruction of vehicles in country). Because our second objective is related to the return of equipment, we focused our analysis on these approximately 9,000 vehicle-return decisions. Moreover, we chose the 12-month period spanning mid-March 2012 to mid-March 2013 because by that time each service had several months' experience in drawing down its equipment. We also examined the transportation costs of returning these vehicles. To estimate the costs of returning equipment, we used the rates TRANSCOM charges the military services for representative types of equipment. We surveyed the organizations responsible for maintaining the databases in order to determine the reliability of the data. From these efforts, we determined that the data were sufficiently reliable for determining how many vehicles were returned that exceeded approved acquisition objectives. To obtain additional information for our review, we contacted or interviewed officials from the Office of the Under Secretary of Defense for Acquisition, Technology and Logistics; the Joint Staff; CENTCOM; U.S. Forces-Afghanistan; TRANSCOM; Surface Deployment and Distribution Command; Headquarters, Department of the Army; U.S. Army Materiel Command; U.S. Army Sustainment Command; U.S. Army TACOM Life Cycle Management Command; U.S. Marine Corps Installations and Logistics; U.S. Marine Corps Logistics Command; U.S. Marine Corps Systems Command; Headquarters, U.S. Navy; Headquarters, U.S. Air Force, Equipment Management Branch; and Defense Logistics Agency (DLA) Disposition Services. We conducted this performance audit from January 2013 to September 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Background

The Army has stated that equipment will not be abandoned in Afghanistan. To assist units in processing equipment for return and arranging proper disposition, the Army's 401st Army Field Support Brigade operates seven RPAT facilities. Army officials told us that each site has at least 20 acres and is equipped to prepare equipment to meet international and U.S. customs standards. RPAT yards process equipment that is excess to needs in theater and also handle equipment that is battle-damaged or declared as a battle loss. As seen below in figure 4, the Bagram and Kandahar yards account for more than 60 percent of the in-country RPAT capacity.
Army Redistribution Property Assistance Teams (RPAT) operate facilities in Afghanistan to process equipment, including vehicles and non-rolling stock, for return to the United States. Units leaving Afghanistan bring equipment that is no longer needed in Afghanistan to an RPAT yard to be processed for return. After turn-in, it can take on average 29 days for non-rolling stock and 60 days for vehicles to be processed for shipment. As a result, there is about a 2-month delay from turn-in to the final processing of vehicles. Because all nonunit equipment must go through RPAT yard processing before shipment, efficient RPAT operations are critical for the transport of equipment out of Afghanistan. Further, to meet future drawdown demand, it is likely that RPAT processing throughput will have to increase from existing levels (see fig. 3). RPAT processing capacity may become a limiting factor for the Afghanistan drawdown. Army officials stated that RPAT facilities in Afghanistan have the capacity to process more than 1,000 vehicles per month. However, they have tested the upper limits of this capacity in only 3 months of a 20-month period (see fig. 3). Officials further stated that they have not reached their capacities due to the limited rate of equipment turn-in. Going forward, we note that simultaneous unit moves, base closures, and other events may result in turn-in levels that exceed the capacity of the facilities. Moreover, RPAT capacity is also a function of space availability and the efficiency with which equipment can be moved out of a yard. This efficiency depends on the ability of units to forecast what equipment will be turned in as well as the ability of the transportation system to move the equipment. From March 2012 to October 2013, average transportation wait times were 44 days for vehicles and 19 days for containers. In the future, RPAT output could be limited by the scheduling of transportation assets and the availability of lines of communication.

Background

In the drawdown from Iraq, the majority of equipment was processed and returned through a nearby port in Kuwait. Due to the landlocked geography of Afghanistan and the challenging political environment in Central Asia, TRANSCOM must rely on multiple routes and modes of transportation for the return of equipment from Afghanistan. U.S. Transportation Command (TRANSCOM) has established 18 surface, airlift, and multimodal airlift routes for the return of equipment from Afghanistan. The Pakistan Ground Lines of Communication (PAKGLOC), terminating at the port of Karachi, is the least expensive route out of Afghanistan. To the north, the Northern Distribution Network (NDN) broadly comprises three land routes terminating at several seaports: one through Central Asia into the South Caucasus; one through Uzbekistan, Kazakhstan, and Russia that ends in the Baltic states; and one through Tajikistan, Kyrgyzstan, Kazakhstan, and Russia that also ends in the Baltic states. These routes have limitations concerning the kinds of equipment that can be transported through them, and availability is subject to geopolitical complexities. For example, the PAKGLOC remained closed for 8 months from 2011 to 2012 and did not become operational again until March 2013. Only about 4 percent of equipment return shipments flow through the NDN, but the NDN routes proved critical for inbound cargo during the PAKGLOC closure (see fig. 5).
Airlift and multimodal movement (air and sea) reduce the amount of time needed to return equipment but are substantially more expensive. Route limitations will affect the cost and time required for equipment reduction. Transportation officials told us that when demands for the return of equipment increase in late 2013 and early 2014, it is unlikely that the PAKGLOC will be able to fully support the demand. Officials stated that equipment return may continue for some time after December 2014 due to the volume of equipment remaining and the limited availability of assets and capabilities to move it. Officials also anticipate that the physical limits of transportation routes, such as aerial port capacities, could present a constraint in the drawdown. TRANSCOM officials cautioned that, in the future, it is possible that the volume of equipment to be returned could exceed the capacity of the system.

Background

There are two types of transfers of equipment in the Afghanistan drawdown: those associated with a base closure and those independent of a base closure. Subject to limitations, equipment in Afghanistan may be transferred within DOD and to other federal agencies, U.S. states, foreign countries, and other recipients. For transfers to Afghanistan through the Foreign Excess Personal Property program, DOD has issued guidance establishing who can approve the transfer of certain equipment by cost threshold. Transfers are coordinated with the U.S. Embassy team. Equipment in Afghanistan that U.S. Central Command (CENTCOM) determines to be no longer needed in the region and that the services and Department of Defense (DOD) want to divest is either destroyed or transferred to another U.S. agency, to U.S. states, to Afghanistan, to another country, or to certain organizations. A number of factors are used to determine whether equipment used in Afghanistan will be transferred, including whether the equipment is in good condition; the needs of the receiving organization or government; the ability of the recipient to sustain the equipment; and the need to maintain critical timelines for base closures and transfer actions. According to Army and Marine Corps forecasts, the amount of equipment in Afghanistan that will be divested (transferred or destroyed) is limited, and the majority will be returned (see fig. 6). The ability to transfer equipment minimizes the need for in-country destruction and could also avoid transportation charges when equipment is not returned to the United States. In fiscal year 2012, the Army estimated that transfers of excess equipment in Afghanistan resulted in the avoidance of at least $1.2 billion in return transportation costs. Transfer will be a limited option for the reduction of the equipment in Afghanistan. According to CENTCOM officials, they have sufficient authorities to transfer equipment to Afghanistan, surrounding countries, and coalition partners. However, while DOD has been able to transfer equipment such as heaters and air conditioners to Afghanistan, it faces challenges in transferring more advanced equipment due to Afghanistan's limitations in absorbing and maintaining these items. Equipment used in Afghanistan can also be transferred to U.S. state and local government agencies, but these transfers may be limited by agencies' having to arrange and fund the transportation of the equipment out of Afghanistan. In addition, Army officials said they may transfer some equipment once it has been returned to the United States.
This transfer would occur after transportation costs have been paid through Overseas Contingency Operations funds.

Background

The functions of DLA Disposition Services include reutilizing end items, selling or donating surplus property, managing disposal of hazardous property, and coordinating the precious metals recovery program. DLA Central's area of responsibility includes Afghanistan, Kuwait, and Iraq, as well as other Gulf States. Within Afghanistan, it operates facilities at Bagram, Kandahar, Camp Leatherneck, and Camp John Pratt, as well as at several other sites. Protocols are in place to prevent the destruction of serviceable items unless specifically authorized. If the services choose to divest an item in Afghanistan, it is either transferred or destroyed (demilitarized). The Defense Logistics Agency (DLA) Disposition Services has established procedures and facilities within Afghanistan to demilitarize the equipment. Demilitarization is the act of destroying the military offensive or defensive capabilities inherent in certain types of equipment or materiel. Demilitarization can be effected through mutilation, dumping at sea, scrapping, melting, burning, or alteration designed to prevent the further use of the equipment and materiel for its originally intended military or lethal purpose. Demilitarization applies equally to materiel in unserviceable or serviceable condition that has been screened and declared excess to Department of Defense (DOD) needs. DLA Disposition Services charges the coalition forces on a per-pound basis for the services necessary to demilitarize an item and is currently charging about 28 to 31 cents per pound. The price is generally based on defraying the costs of overhead and operations, and it also considers that sale of resulting scrap material will yield a certain amount of revenue. Because this is a working capital fund operation, the military services pay a rate adjusted proportionally for the amount of the service they receive. Table 2 compares the price of demilitarizing listed items with the cost of transporting the items to the Gulf Coast of the United States.

Potential Future Challenges

Space could be a limiting factor at DLA Disposition Services demilitarization facilities in Afghanistan. A senior DLA Disposition Services official said it is conceivable that demand for demilitarization or destruction could exceed the capacity of facilities. To accommodate demand and flow, DLA Disposition Services has created some flexibility by reprogramming personnel from one task to another to match demands. A DLA Disposition Services official also acknowledged that demilitarization or destruction operations are likely to continue well after December 2014, noting that such operations in Iraq continued into 2013.

In addition to the contact named above, individuals who made key contributions to this report include: Guy LoFaro, Assistant Director; Carolynn Cavanaugh; Carole Coffey; Timothy DiNapoli; Charles Johnson; Cale Jones; Larry Junek; Anne McDonough-Hughes; Carol Petersen; Terry Richardson; Michael Shaughnessy; Amie Steele; Jose Watkins; Cheryl Weissman; Amanda Weldon; and Steve Woods.

Afghanistan: Key Oversight Issues. GAO-13-218SP. Washington, D.C.: February 11, 2013.

Afghanistan Drawdown Preparations: DOD Decision Makers Need Additional Analyses to Determine Costs and Benefits of Returning Excess Equipment. GAO-13-185R. Washington, D.C.: December 19, 2012.
Warfighter Support: DOD Has Made Progress, but Supply and Distribution Challenges Remain in Afghanistan. GAO-12-138. Washington, D.C.: October 7, 2011.

Iraq Drawdown: Opportunities Exist to Improve Equipment Visibility, Contractor Demobilization, and Clarity of Post-2011 DOD Role. GAO-11-774. Washington, D.C.: September 16, 2011.

Defense Logistics: DOD Needs to Take Additional Actions to Address Challenges in Supply Chain Management. GAO-11-569. Washington, D.C.: July 28, 2011.

Warfighter Support: Preliminary Observations on DOD's Progress and Challenges in Distributing Supplies and Equipment to Afghanistan. GAO-10-842T. Washington, D.C.: June 25, 2010.

Operation Iraqi Freedom: Actions Needed to Facilitate the Efficient Drawdown of U.S. Forces and Equipment from Iraq. GAO-10-376. Washington, D.C.: April 19, 2010.

Operation Iraqi Freedom: Preliminary Observations on DOD Planning for the Drawdown of U.S. Forces from Iraq. GAO-10-179. Washington, D.C.: November 2, 2009.

Iraq and Afghanistan: Availability of Forces, Equipment, and Infrastructure Should Be Considered in Developing U.S. Strategy and Plans. GAO-09-380T. Washington, D.C.: February 12, 2009.

Operation Iraqi Freedom: Actions Needed to Enhance DOD Planning for Reposturing of U.S. Forces from Iraq. GAO-08-930. Washington, D.C.: September 10, 2008.
DOD anticipates that the drawdown from Afghanistan will be more difficult than that from Iraq due to logistical challenges and the costs of transporting equipment out of landlocked Afghanistan. As of summer 2013, the Army and Marine Corps had substantial amounts of equipment in Afghanistan. The efficiency and effectiveness of equipment disposition decision making can directly affect the total cost of the drawdown. GAO was asked to review DOD's efforts to execute the drawdown in a cost-effective and efficient manner. GAO examined (1) the status of DOD's efforts to draw down equipment from Afghanistan and (2) the extent to which DOD has taken steps to create efficiencies and consider costs concerning the return of equipment from Afghanistan. To evaluate these efforts, GAO reviewed documents and data containing approximately 10,000 disposition decisions made over a 12-month period, in addition to interviewing DOD officials in the United States and Afghanistan.

The Department of Defense (DOD) has made some progress in its drawdown of equipment from Afghanistan, but ongoing uncertainties about the future force in Afghanistan could affect progress of the drawdown. Specifically, from October 2012 to October 2013, DOD returned from Afghanistan or destroyed 14,664 vehicles, an average of 1,128 vehicles per month. Future progress toward drawdown goals will depend on equipment turn-in rates, which, in turn, depend on having more information about the post-2014 force level and mission. In addition, over the course of the last 8 months of the above period, the number of vehicles turned in by units for the drawdown averaged 55 percent of what had been forecast. This is because some vehicles that had been forecast for turn-in were instead redistributed to other units in Afghanistan. A senior DOD official stated that units have retained equipment because of uncertainty regarding future operational needs in Afghanistan. Once the post-2014 force level and mission are announced, these vehicle turn-in rates may increase. DOD has taken some steps to improve efficiencies and manage costs in its Afghanistan drawdown processes. For example, U.S. Central Command amended its drawdown instruction to allow for aggregation of equipment at U.S. ports. According to DOD officials, this will allow for shipment of equipment via rail, resulting in potential savings when compared with trucking costs. However, due to ineffective internal controls, the Army and Marine Corps may be incurring unnecessary costs by returning equipment that potentially exceeds service needs or is not economical to return and repair. Specifically, GAO found the following:

In a 12-month period, the Army and Marine Corps returned more than 1,000 potentially unneeded vehicles, thereby incurring estimated transportation costs of up to $107,400 per vehicle, depending on the type of vehicle. DOD guidance indicates that equipment exceeding certain service-approved quantities should not be retained unless economic or contingency reasons support its retention. However, neither the Army nor the Marine Corps documented and reviewed justifications for returning items exceeding these approved quantities. Federal internal control standards state that documentation and review should be part of an organization's management to provide reasonable assurance that operations are effective and efficient.
The Army and Marine Corps may have returned vehicles that were uneconomical to return and repair because they did not consider transportation costs in making equipment-disposition decisions. DOD guidance states that all costs associated with materiel management, including transportation costs, shall be considered in making best-value decisions throughout the DOD supply chain. When all costs are not included in the decision-making process, there is risk of allowing the return and repair of uneconomical-to-return-and-repair equipment.

This is a public version of a For Official Use Only (FOUO) report GAO issued previously, which omits FOUO information and data such as the schedule of drawdown efforts, numbers of vehicles returned, overall drawdown goals, and some cost information that DOD deemed FOUO.

GAO recommends that DOD ensure that the Army and Marine Corps document and review justifications for the return of potentially unneeded items and that transportation costs and other relevant costs be included in disposition decision making. DOD concurred with GAO's recommendations.
In the early 1990s, NASA was planning an infrastructure to support a projected annual budget of more than $20 billion and a civil service workforce of about 25,000 by the turn of the century. However, over the last several years, NASA has been directed by the Administration to reduce its future years' budget levels. In the fiscal year 1994 budget request, NASA's total funding for fiscal years 1994 through 2000 was reduced by 18 percent, or $22 billion. In the fiscal year 1995 budget request, total funding was reduced again by almost $13 billion, or an additional 13 percent. To absorb these major reductions, NASA focused on adjusting programs. For example, the Space Station program was restructured and given an annual budget ceiling of $2.1 billion. Similarly, the scope of the Earth Observing System program was reduced, and the program is being restructured once again. Also, funding was terminated for the Space Exploration Initiative, the National Launch System, and the National Aerospace Plane, and the Comet Rendezvous and Asteroid Flyby project was canceled. As part of the executive branch's development of NASA's $14.2-billion budget request for fiscal year 1996, NASA was directed once again to lower its projected budget through fiscal year 2000, this time by an additional 5 percent, or $4.6 billion. Rather than terminating or delaying core science, aeronautics, or exploration programs, NASA announced it would absorb this funding decrease by reducing infrastructure, including closing and consolidating facilities. NASA also said it would reduce its use of support contractors and decrease the size of its workforce to about 17,500 by the turn of the century—the lowest level since the early 1960s. While NASA's actual and planned budgets and staffing levels have decreased sharply, the value of its facilities infrastructure has actually increased. From fiscal years 1990 through 1995, the current replacement value of NASA's facilities increased by about 14 percent, not including inflation. At the end of fiscal year 1995, the agency's facilities had an estimated current replacement value of $17 billion. The agency owned or leased about 3,000 buildings on nearly 130,000 acres of land at 71 locations. NASA's facilities range in size from small buildings to large industrial plants. Appendix I provides information about facilities at NASA's 10 field centers. As part of its infrastructure reduction efforts, NASA is also looking at ways to cut the cost of its field center support activities. Each NASA field center operates a wide range of such activities, with some supporting unique missions at particular centers. Common activities include building maintenance, fire protection, security, printing, and medical services. NASA provides these services primarily through a combination of civil service employees, contractor labor, and arrangements with the Department of Defense (DOD) where facilities are collocated. NASA's current facility closure and consolidation plans will not fully achieve the agency's goal of decreasing the current replacement value of its facilities by about 25 percent (about $4 billion in 1994 dollars) by the end of fiscal year 2000. More importantly, these plans will not result in substantial cost reductions by that date. By the end of fiscal year 1997, NASA plans to have closed or converted facilities to cost-reimbursable status that have a current replacement value of $1.9 billion.
Also, as of March 1996, planned reductions through fiscal year 2000 were $2.8 billion, or about 30 percent below NASA’s goal of reducing the current replacement value of its facilities by about $4 billion in 1994 dollars. Agency officials noted that the $4-billion reduction goal was a “stretch,” or aggressive goal, which they were never certain could be achieved. Additional reductions are unlikely in research and development facilities, but there may be opportunities for further reductions in office space, according to NASA officials. NASA classifies building space based on its primary use, such as office space. NASA was providing general purpose office space for about 53,000 civil service and contractor personnel at the end of fiscal year 1995. Agencywide, the average square feet of office space available per person, including substandard space, exceeded NASA’s standard by nearly 43 percent and the ceiling by over 25 percent. When substandard space is not included, this average exceeded the standard by over 27 percent and the ceiling by 12 percent. Future reductions in the number of on-site contractor personnel and NASA employees (almost 4,000) by fiscal year 2000 will make even more office space available. NASA estimates that the planned $2.8-billion reduction in the current replacement value of facilities will yield only about $250 million in cost reductions through fiscal year 2000. Although some of these cost reductions are from lowering facilities’ operations and maintenance costs, most result from four centers bringing contractor personnel from off-site leased space onto the centers to fill space left vacant because of reductions in NASA personnel and support contractors. Moreover, some cost reductions may be offset by increased costs in future years. For example, according to a NASA official, about three-quarters of NASA facilities are 30 or more years old and keeping these facilities operational may lead to higher operations and modernization costs. NASA has had problems in identifying, assessing, or implementing some cost-reduction opportunities. NASA personnel (1) did not thoroughly evaluate potential larger cost-reduction options, (2) limited the scope of consideration for consolidation, (3) performed questionable initial cost-reduction studies, (4) made inappropriate closure recommendations, and (5) substantially overstated cost-reduction estimates. Some of these problems resulted when NASA acted quickly in an attempt to achieve near-term cost reductions. NASA officials said that some cost-reduction estimates were “interim” estimates because NASA was pressured into prematurely providing what turned out to be imprecise savings estimates to Congress. Also, some NASA staff lacked experience in developing estimates, according to NASA officials. Although there were problems with some evaluations, which are discussed below, others appear to have been done better. For example, the Office of Space Flight reviewed in detail a proposed consolidation of automated data processing functions at a single location before developing a plan that offered several options. Concerns about closing facilities, relocating activities, and consolidating operations have sometimes been exacerbated by perceptions of the lack of fairness and impartiality in the decision-making process. In the past, we have expressed concerns about NASA’s ability to accurately and independently develop cost estimates to support its decisions on new and ongoing programs and projects. 
Just recently, the NASA Inspector General and NASA management have been discussing the structure required to meet NASA's continuing need for independent, impartial, and technically credible systems analysis and program evaluation. NASA did not thoroughly evaluate potential larger cost-reduction options for consolidating wide area telecommunications networks. NASA has five such networks operating or being developed, and they provide a variety of communications services among headquarters, field centers, major contractors, affiliated academic institutions, and international partners. Due to advances in technology, NASA no longer needs to operate multiple telecommunications networks, and consolidating network operations at a single site offers economies of scale, as well as reduced administrative overhead. Last year, NASA's Zero Base Review team recommended that the field centers compete to determine which one could consolidate the five wide area networks most cost-effectively. Goddard Space Flight Center, Ames Research Center, and Marshall Space Flight Center prepared consolidation proposals. NASA's Office of Space Communications, which oversees the two largest networks, decided against a competition and did not formally consider the proposals offered by Goddard and Ames. Instead, with the objective of obtaining some budget cost reductions in 1997, it endorsed the Marshall proposal without determining which of the three proposals was the most cost-effective. However, Marshall's proposal did not project cost reductions in the near term as aggressively as the others. For example, Goddard's proposal estimated potential cost reductions over the next 6 years totaling $94.5 million more than the reductions in the Marshall proposal. Earlier this year, we recommended that NASA conduct an objective review of network consolidation to determine whether its chosen approach should be modified to achieve greater cost reductions. NASA agreed with this recommendation and arranged for an independent group to conduct the review. It indicated the agency's telecommunications experts were not participating in the review because they would have a "biased" perspective. The independent review is scheduled to be completed this month. NASA's initial network consolidation efforts were hampered by the lack of clear direction within NASA to include all five wide area networks in the consolidation effort. The Office of Space Communications, which directed the consolidation effort at Marshall, does not have authority over three of the five networks. The agencywide telecommunications network consolidation or streamlining efforts did not have a strong central advocate. NASA's Chief Information Officer, who would be a logical choice to fill this role, was not directly involved in this effort. NASA initially excluded almost 40 percent of its supercomputer systems, which were used mostly for research and development, from the scope of a supercomputer consolidation study. The agency uses supercomputers to support some space mission operations and a variety of research projects, including developing new supercomputer technologies. In March 1995, NASA began studying ways to cut its supercomputer costs by consolidating their management and operation. However, its initial studies considered only some of the agency's supercomputers and focused on nonresearch and development supercomputing systems.
Although NASA’s consolidation study team had identified 29 supercomputers, NASA management excluded 12 existing machines and some planned for future procurement from consideration because (1) some are being managed under existing contracts that could be affected by a consolidation decision and (2) others were used in research programs primarily to develop new supercomputer technologies. We spoke with NASA program, field center, and supercomputer consolidation study officials about the reasons for, and appropriateness of, limiting the scope of NASA’s consolidation study. During a series of discussions, NASA officials acknowledged our concerns about the study’s limitations and expanded its scope to a phased approach that will eventually consider all of the agency’s supercomputers. In commenting on a draft of this report, NASA said the review will be based on a top-down plan for agencywide management of supercomputing operations and will design an optimal supercomputer architecture as a basis for determining future directions in this area. Questions about initial studies have delayed the decision-making process in NASA’s attempts to consolidate aircraft. Last year, the transfer of research and operational support aircraft from five NASA centers to the Dryden Flight Research Center was proposed. NASA headquarters tasked Dryden, the center that would gain from the consolidation, with planning and performing an aircraft consolidation study. In a recent report, the NASA Inspector General noted the Dryden study had estimated NASA could save $12.6 million annually by consolidating aircraft at Dryden. However, internal and external questions about the scope and quality of this study have slowed the decision-making process. Subsequent reviews of the costs and benefits of aircraft consolidation by both NASA management and the NASA Inspector General staff have resulted in much lower annual savings estimates. In light of the controversy that potentially accompanies any significant decision to consolidate, relocate, or close facilities, NASA would benefit from ensuring an adequate balance of expertise and interests for study teams, developing initial analyses that are objective and well-supported, and fairly and thoroughly considering reasonable alternatives before making decisions. In this way, NASA can develop defensible decisions that will withstand external scrutiny and can be implemented in a timely manner. Plum Brook Station was inappropriately recommended for possible closure twice. In February 1995, the NASA Federal Laboratory Review recommended reviewing the station for possible closure because it was being operated primarily for non-NASA users. At about the same time, NASA’s White Paper, formally titled A Budget Reduction Strategy, suggested that Plum Brook should be closed. NASA officials could not provide the rationale for the proposed action. The Laboratory Review report did acknowledge a problem concerning the existence of an inactive nuclear reactor at the station, and the Zero Base Review subsequently recommended retaining Plum Brook on a fully reimbursable basis because of the reactor. Plum Brook operates on a cost-reimbursable basis, with most of its operating cost covered by revenue from users of four test facilities at the station. Even if all four of the test facilities were closed, the operating cost would still be about $2 million, primarily because the Nuclear Regulatory Commission requires that the reactor be maintained in its current state. 
The only way to close the location and dispose of the property would be to dismantle the reactor. However, the cost of doing so would be prohibitive—about $100 million in 1997 dollars, according to a 1990 estimate. In addition, there are no disposal sites to accommodate the radioactive waste that would be generated by the dismantling process. In some cases, NASA's initial estimates of cost reductions were overstated. For example, the Zero Base Review estimated that $500 million or more could be saved through 2000 by commercializing the Tracking and Data Relay Satellite System. However, NASA later determined this approach could not be implemented and that none of the projected savings would materialize in the time frame targeted by the Zero Base Review. Also, the Zero Base Review claimed that consolidation of telecommunications networks would save between $350 million and $375 million. Subsequently, NASA officials acknowledged not only that these cost reductions would be significantly lower, but also that the lower savings estimates had already been considered in the preparation of the networks' future budget estimates. The estimated savings noted above were part of the total savings estimate that provided the basis for NASA's claim that the fiscal year 1996 out-year budget reductions could be covered by infrastructure decreases. To the extent the estimates were overstated, additional pressure was placed on NASA program and field center officials to find efficiencies to supplant the overstated savings. For example, after NASA determined that commercializing the Tracking and Data Relay Satellite System would not reduce costs, it began aggressively negotiating a fixed-price contract for the purchase of three additional satellites needed for the system. However, NASA estimates that the fixed-price contract produced considerably less savings than commercializing the system had been projected to yield. NASA's future facility disposition decisions could be affected by environmental cleanup costs. Therefore, information about the extent and type of contamination, the cost of its cleanup, and the party who is financially responsible is relevant to such decisions. However, NASA officials do not yet fully know what the cleanup requirements will be and lack a policy for identifying other responsible parties and sharing cleanup costs. Currently, NASA officials are still working to identify all the challenges they face as a result of environmental contamination. NASA's 1996 site inventory identified over 900 potentially contaminated sites, about half of which may require cleanup. At this time, according to NASA records, only 72 sites are classified as closed and, of these, only 15 required cleanup. Most sites are still in the early stages of the cleanup process, with almost 400 still being studied to determine the type and extent of contamination. NASA headquarters used selected portions of a DOD model to develop a preliminary cost estimate of $1.5 billion for cleaning up potentially contaminated sites over a 20-year period. Subsequently, NASA's field centers, in response to our request, developed cost estimates totaling $636 million. This estimate excludes some sites that have not been studied and is a projection of cleanup cost for only the next 8 years or less. Although NASA field centers have not developed cleanup cost estimates for disposing of property in the future, officials at several centers believed the cost could be as much as two to five times higher than if NASA were to retain the property.
The higher cost would occur if NASA cleaned up facilities to meet more stringent standards that might be required for disposal. Sharing cleanup costs with others could help NASA reduce its environmental cleanup costs. Environmental law holds owners, operators, and other responsible parties liable for correcting past environmental contamination. However, NASA has no policy on pursuing other responsible parties. It currently pays the cleanup costs for virtually all of its centers and other field locations, regardless of who was responsible for causing or contributing to the contamination. Although NASA has identified other responsible federal agencies, it has not generally tried to identify potentially responsible contractors or previous owners and pursue cost-sharing agreements with them. An ongoing facility reduction effort where cost sharing may be an issue involves land at NASA’s Industrial Plant in Downey, California. The city wants to acquire 166 acres of this property: 68 acres NASA has identified as excess to its needs and 98 acres it has identified as potentially excess. The city plans to use the land for economic development projects. An assessment of environmental contamination determined that 16 of the excess acres were free of contamination. Studies of the remaining excess acreage are underway. The eventual disposition of the remaining 98 acres of NASA-owned land is still unclear, and studies of their contamination status are still in the early stages. Before NASA took over the Downey facility, it was a DOD facility operated by the predecessor organization of the contractor currently operating the facility for NASA. NASA will have to decide which potentially responsible parties it will pursue in supporting any corrective actions that may be needed to meet applicable cleanup standards. However, NASA’s Johnson Space Center, which manages the Downey facility, has not yet begun to deal with the potential cost-sharing issue and, as noted above, there is no NASA-wide policy providing guidance on this issue. In commenting on a draft of this report, NASA stated it intends to complete a policy statement by the end of 1996 to address the issue of potential responsible parties at NASA facilities requiring environmental remediation. Among NASA’s initiatives to reduce its infrastructure are efforts to lower the field centers’ operations support costs. NASA spends over $1 billion annually to support maintenance and operations at field centers. Among the actions NASA is taking to reduce this cost is consolidating its payroll functions at one center to cut payroll-related civil service and contractor staffing by about 50 percent. It is also implementing a variety of initiatives to share resources and standardize processes at its principal aeronautics centers—Ames, Langley, Lewis, and Dryden. NASA estimates that this effort—known as Project Reliance—will reduce agency costs by about $36 million by fiscal year 2000. In June 1995, NASA expanded the scope of its cost-reduction search outside the agency; it teamed with DOD to study how the two agencies could significantly reduce their operations costs and increase mission effectiveness and efficiency through increased cooperation and sharing. Study teams, referred to as integrated product teams, began work in September 1995 in seven areas. We monitored three teams: major facilities, space launch activities, and base/center support and services. 
The objectives of the major facilities and space launch activities teams included assessing facilities’ utilization and recommending potential consolidations and closures. The major facilities team was responsible for (1) developing recommendations on test and evaluation and research facilities with unnecessary overlap or redundancy and (2) identifying and providing the rationale for consolidations, realignments, and reductions for specific facilities. The space launch activities team focused on increasing cooperation in its area, including range and launch facilities and infrastructure. Neither team recommended specific consolidations or closures or identified cost reductions in their final briefings to the Aeronautics and Astronautics Coordinating Board. Both teams did, however, identify barriers to increased cooperation and coordination between NASA and DOD, including differences in cost accounting systems, practices, and standards. More importantly, NASA and DOD officials noted a more general limitation: the “old paradigm”—that is, each NASA and DOD program protects its ability to maintain its own technical expertise and competence. The over- capacity situation in large rocket test facilities helps to illustrate this. Several years ago, the National Facilities Study concluded that there was excess large rocket test capacity and some facilities could be closed, but DOD and NASA officials involved in the study said no direction or funding was subsequently made available to pursue this recommendation. More recently, the major facilities team found that NASA and DOD each have excess large rocket test capacity based on both current and projected workloads. However, the team made no recommendation to consolidate facilities because comparable facilities’ cost data was not available. The team did recommend that a facility agreement in the area of rocket propulsion testing be established to identify areas where capability reductions and greater reliance between NASA and DOD would be possible in the future. While the issue of large rocket test capacity remains unresolved, some rocket test facilities are currently undergoing or being considered for modification. A rocket test complex at Edwards Air Force Base is being upgraded by DOD at an estimated cost of $15 million to $17 million. In addition, NASA plans to upgrade one of its rocket engine test facilities at Stennis Space Center for about $45 million. DOD and NASA officials believe that their respective upgrades are cost-effective, although they agreed that the agencies need to improve coordination to prevent further excess capacity. NASA believes that the rocket test facilities at Stennis and Edwards Air Force Base are not comparable. However, the National Facilities Study and the major facilities integrated product team raised the overall excess capacity issue, and it has not yet been resolved. Independent actions by DOD and NASA to upgrade their individual facilities potentially exacerbate the problem of overall excess capacity. NASA and DOD officials acknowledged that recommending sharing and increasing reliance on each other, including consolidating or closing facilities, was difficult. These officials pointed out that, in many cases, such actions are “too politically sensitive” or could result in near-term costs increases, rather than cost reductions. They noted that an external, independent process, similar to the one used by the Defense Base Closure and Realignment Commission, may be needed to overcome the sensitivity and cost issues. 
The base/center support and services team, which was responsible for recommending ways to increase cooperation in base/center support and services, examined existing and potential cooperative arrangements at eight NASA centers and one test facility collocated with or geographically near DOD installations. The team reported finding over 500 existing support arrangements and identified additional cooperative opportunities. The team identified changes to activities at several NASA locations, including having NASA’s Dryden Flight Research Center and the Air Force Flight Test Center jointly use space and combine certain operations; constructing one fuel facility for joint use by NASA’s Langley Research Center and Langley Air Force Base; and sharing use of contracts and services. Although the team expects such changes to lower the agencies’ costs by millions of dollars, it cited specific barriers to accomplishing more. For example, different negotiated wage rates for support service contractors could be a barrier, since consolidations would likely require paying the higher rate, thereby substantially or totally offsetting consolidation cost reductions. In other cases, merging certain activities could complicate existing procurements in small and disadvantaged business set-aside programs. However, the team said that many more sharing arrangements are possible and should be included in follow-on studies. In developing the follow-on process, this team recommended and then provided guidance on designating lead offices, establishing and updating metrics and milestones, and sharing information. NASA and DOD officials indicated that the work started by the integrated product teams would continue. A joint DOD-NASA report, which could be released later this month, will recommend that six alliances be established to continue the work initiated by the major facilities team, according to a NASA official. Only two of the alliances have been organized. The official also stated that four panels of the Aeronautics and Astronautics Coordinating Board are to be established to oversee the follow-on activities. However, three of the panels have been delayed due to personnel reorganizations affecting both DOD and NASA, and it is uncertain when they will be initiated, according to the NASA official. The only panel to be established to date is the Aeronautics Panel, which met in July 1996. The details of the follow-on processes for continuing the work of the integrated product teams have not yet been fully developed. One measure of the relevance and success of these processes will be how they handle an issue such as overcapacity in large rocket test facilities. In commenting on a draft of this report, NASA said that the NASA-DOD National Rocket Propulsion Test Alliance will strive for joint management of facilities so they can be brought on or offline and investments controlled for maximum benefit. NASA also said this alliance “will examine indepth the current and future projected workloads to achieve proper asset management and utilization of rocket test facilities.” We recently reported that NASA does not yet have fully developed plans to reduce its personnel level by about 4,000 full-time equivalent employees to meet its overall goal of decreasing the size of its workforce to about 17,500 by fiscal year 2000. Also, it may not be able to do so without involuntarily separating employees. 
NASA projections show that voluntary attrition should meet the downsizing goal through fiscal year 1998, but will not provide sufficient losses by fiscal year 1999. Thus, NASA intends to start planning a reduction-in-force during fiscal year 1998, if enough NASA employees do not retire or resign voluntarily. NASA’s ability to reach its workforce reduction goal by the turn of the century is subject to major uncertainties, including the shifting of program management from headquarters to field centers and the award of a single prime contract for managing the space shuttle at Kennedy Space Center. We proposed that, in view of these uncertainties, Congress may wish to consider requiring NASA to submit a workforce restructuring plan for achieving its fiscal year 2000 personnel reduction goal. NASA estimates that civil service personnel reductions will save about $880 million from fiscal year 1996 through fiscal year 2000. NASA faces barriers to accomplishing additional consolidations and closures that it may not be able to overcome on its own. Closing facilities, relocating activities, and consolidating operations in fewer locations with fewer employees is not easy because of concerns about the effects of such actions on missions, personnel, and local communities. NASA and DOD officials have suggested that a process similar to the one used by the Defense Base Closure and Realignment Commission may ultimately be needed to adequately deal with the political sensitivity and cost issues that inevitably accompany consolidation and closure decisions. Given NASA’s limited progress to date, further opportunities to reduce infrastructure, and the agency’s lack of control over some barriers to further reductions, Congress may wish to adopt the idea of having such a process if NASA’s efforts fail to show significant progress in the near future in consolidating and closing facilities. To help determine the need for an independent process to facilitate closures and consolidations of NASA facilities, Congress may wish to consider requiring NASA to submit a plan outlining how it intends to meet its goals for a reduced infrastructure through fiscal year 2000. Such a plan should include estimated cost reductions resulting from specific facility closures and consolidations. In commenting on a draft of this report, NASA stated that it is committed to streamlining its workforce and supporting infrastructure and is continuing to make fundamental changes in the way it operates. NASA specifically noted that it intends to meet its fiscal and programmatic challenges through efficiencies, restructuring, privatization, commercialization, out-sourcing, and performance-based contracting. NASA commented on a number of areas discussed in the report, and it provided us with some additional or updated information and suggested changes to enhance the clarity and technical accuracy of the draft. We have incorporated the agency’s suggested changes in the final report where appropriate. NASA’s comments are reprinted in their entirety in appendix III, along with our final evaluation of them. Our scope and methodology is discussed in appendix IV. Unless you publicly announce this report’s contents, we plan no further distribution until 30 days from its issue date. At that time, we will send copies to other interested congressional committees, the Administrator of NASA, and the Director of the Office of Management and Budget. We will also provide copies to others upon request. 
Please contact me on (202) 512-4841, if you or your staff have any questions concerning this report. Major contributors are listed in appendix V. Net usable square feet (thousands) Dryden Flight Research Center is located on Edwards Air Force Base, California. The study’s purpose was to advise the NASA Administrator on the approaches the agency’s management could use to implement the U.S. space program in the coming decades. Of the 15 recommendations made, 2 related indirectly to facilities infrastructure. The study was completed in December 1990. At the direction of the NASA Administrator, the agency’s Deputy Administrator reviewed NASA’s roles and missions and suggested ways to implement the Augustine Committee’s recommendations. The recommendations focused on NASA field centers’ missions and project management approaches. Of the 33 recommendations, 9 were related indirectly to facilities infrastructure. The study was completed in November 1991. With some modification, the NASA Administrator approved all recommendations from the Roles and Missions Study and called for implementation plans from the center directors and headquarters program offices. The recommendations were approved in December 1991. This federal governmentwide review examined cabinet-level departments and 10 agencies, including NASA. One of the 19 recommendations that focused on NASA was directly related to facilities. The review was completed in September 1993. This document was issued in January 1994 by the Associate Administrator for Space Flight in response to the Administrator’s December 1991 call for implementation plans and the current Administrator’s renewed emphasis on roles and missions. It identified a number of recommendations to implement the roles and missions recommendations and assigned follow-up responsibilities. Of 38 recommendations, 15 related to specific facilities. The study was initiated in 1992 by the NASA Administrator to develop a comprehensive long-range plan to ensure that research, development, and operational facilities were world-class and to avoid duplication of facilities. The study group was composed of representatives from NASA; the Departments of Defense (DOD), Transportation, Energy, and Commerce; and the National Science Foundation. Almost 200 recommendations were made, including 68 specifically related to NASA facilities. The study was completed in April 1994. Contracted by NASA and DOD, the National Research Council reviewed the findings in the National Facilities Study to evaluate the requirements presented in the national facilities plan for space and research and development operations. The Board made 11 recommendations, 4 of which related to facilities in general. None of the recommendations related to specific facilities. The review was completed in 1994. The Aeronautics and Space Engineering Board conducted this review at NASA’s request. The study’s purpose was to independently examine projected requirements for, and approaches to, the provision of needed aeronautical ground test facilities. The Board made 13 recommendations; 2 related to specific NASA facilities. The review was completed in 1994. Federal Laboratory Review Conducted under the auspices of the NASA Advisory Council, this study was tasked to evaluate and develop recommendations for improving the efficiency and effectiveness of the federal research and development investment in the NASA laboratory system. 
The review was also to consider possibilities for restructuring, consolidating, closing, or reassigning facilities. The Laboratory Review made 74 recommendations and 3 suggestions related to specific facilities. The review was completed in February 1995. The White Paper, formally titled A Budget Reduction Strategy, was intended as a starting point for discussions on a proposed realignment of center roles and missions and reinvention in a constrained budget environment. The paper made about 40 recommendations total; 15 were related to facilities. The paper was issued February 1995. This review was a NASA-wide effort to allocate reductions in the fiscal year 1996 President’s budget, set center role assignments, provide suitable guidance for the fiscal year 1997 budget, and change the way NASA conducted business. About 50 recommendations were made, of which 2 applied to specific facilities. The review was completed in June 1995. NASA teamed with DOD to study how the two agencies could significantly reduce their investment and operations costs and increase mission effectiveness and efficiency through increased cooperation at all organizational levels. Study teams, referred to as integrated product teams, began work in September 1995 in seven areas. Each team addressed facilities, as appropriate, in its assigned functional area. Teams reported their recommendation to the Aeronautics and Astronautics Coordinating Board in April 1996. Additional information on this effort is presented in the body of this report. The following are GAO’s comments on NASA’s letter dated September 6, 1996. 1. The language of the report was modified where appropriate. 2. NASA provided information on activities and initiatives that occurred after the issuance of our report on Telecommunications Network: NASA Could Better Manage Its Planned Consolidation (GAO/AIMD-96-33, Apr. 9, 1996). 3. Our description of the current situation at Downey is in the context of a potential, not a known, cost-sharing issue. 4. NASA provided information on two rocket propulsion test facilities and stated that they are not comparable. However, we made no comparison of these facilities. We merely pointed out that, while the issue of potential excess in large rocket engine test capacity remains unresolved, efforts are underway or planned to upgrade such facilities. As noted in the report, such independent actions potentially worsen the problem. The overcapacity issue could benefit from a thorough, governmentwide assessment. 5. The report discusses the possible future need for a process similar to the one used by the Defense Base Closure and Realignment Commission. Such a process could be applied to individual facilities, groups of facilities, or entire agencies. There is no reason to believe that the process would be appropriate only for DOD or for numerous locations. We reviewed the value of NASA’s facilities and its budgets and staffing; facility reduction plans; real property reports; utilization data and reports; studies, including the National Facilities Study and the NASA Federal Laboratory Review; environmental law, policies, and procedures; and reports by the NASA Inspector General. We interviewed officials at NASA field centers and in the Offices of Management Systems and Facilities, Headquarters Operations, Space Flight, Space Communications, Human Resources and Education, Environmental Management Division, and Inspector General at NASA headquarters. To discuss NASA-DOD coordination efforts, we interviewed NASA and DOD officials. 
We also spoke with officials from Rockwell International, Space Systems Division, about plans for the NASA Industrial Plant, Downey, California. We obtained information from all NASA field centers, including information on the value and utilization of facilities, plans for closing facilities and estimated savings through fiscal year 2000, facilities project budgets, and cleanup and cost-sharing activities. We also spoke with officials from other federal agencies, including the General Services Administration and the Environmental Protection Agency. We obtained electronic versions of NASA’s real property and major facility inventory databases and NASA’s potentially contaminated site inventory database, but did not independently verify the reliability of the data in the databases. Because the National Facilities Study included aircraft in its work, we included them in our review. We conducted our audit work at NASA headquarters, Washington, D.C.; Ames Research Center, Moffett Field, California; Goddard Space Flight Center, Greenbelt, Maryland; Wallops Flight Facility, Wallops Island, Virginia; Jet Propulsion Laboratory, Pasadena, California; Lyndon B. Johnson Space Center, Houston, Texas; NASA Industrial Plant, Downey, California; White Sands Test Facility, Las Cruces, New Mexico; John F. Kennedy Space Center, Florida; Langley Research Center, Hampton, Virginia; Lewis Research Center, Cleveland, Ohio; Plum Brook Station, Sandusky, Ohio; George C. Marshall Space Flight Center, Huntsville, Alabama; Michoud Assembly Facility, New Orleans, Louisiana; Santa Susana Field Laboratory, California; John C. Stennis Space Center, Mississippi; Phillips Laboratory, Edwards Air Force Base, California; and Vandenberg Air Force Base, California. We conducted our work from June 1995 through August 1996 in accordance with generally accepted government auditing standards. Uldis Adamsons Frank Degnan Raymond H. Denmark, Jr. Sandra D. Gove William E. Petrick, Jr. Jamelyn A. Smith The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. VISA and MasterCard credit cards are accepted, also. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent. U.S. General Accounting Office P.O. Box 6015 Gaithersburg, MD 20884-6015 Room 1100 700 4th St. NW (corner of 4th and G Sts. NW) U.S. General Accounting Office Washington, DC Orders may also be placed by calling (202) 512-6000 or by using fax number (301) 258-4066, or TDD (301) 413-0006. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone. A recorded menu will provide information on how to obtain these lists.
Pursuant to a congressional request, GAO reviewed the status of the National Aeronautics and Space Administration's (NASA) efforts to achieve reductions and efficiencies in key areas of its infrastructure. GAO found that: (1) NASA plans for a $2.8-billion reduction in the current replacement value of its facilities will yield only about $250 million in cost reductions through fiscal year (FY) 2000; (2) NASA has experienced problems in assessing cost-reduction opportunities because it did not thoroughly evaluate cost-reduction options, excluded many systems in its review of ways to cut supercomputer costs, performed questionable initial studies for aircraft consolidation, made inappropriate closure recommendations, and overstated cost-reduction estimates; (3) although environmental cleanup costs could affect facility disposition efforts, NASA lacks a policy for identifying other responsible parties and sharing cleanup costs; (4) a joint effort between NASA and the Department of Defense to study potential operation cost reductions through increased cooperation and sharing yielded no specific recommendations for closures, consolidations, or cost reductions but did identify barriers to sharing and increasing interagency reliance; and (5) NASA ability to reach its workforce reduction goal by 2000 is subject to some major uncertainties, and NASA may need to plan a reduction in force if enough employees do not retire or resign voluntarily.
The Government Performance and Results Act (GPRA) is intended to shift the focus of government decisionmaking, management, and accountability from activities and processes to the results and outcomes achieved by federal programs. New and valuable information on the plans, goals, and strategies of federal agencies has been provided since federal agencies began implementing GPRA. Under GPRA, annual performance plans are to clearly inform the Congress and the public of the (1) annual performance goals for agencies’ major programs and activities, (2) measures that will be used to gauge performance, (3) strategies and resources required to achieve the performance goals, and (4) procedures that will be used to verify and validate performance information. These annual plans, issued soon after transmittal of the president’s budget, provide a direct linkage between an agency’s longer-term goals and mission and its day-to-day activities. Annual performance reports are to subsequently report on the degree to which performance goals were met. The issuance of the agencies’ performance reports, due by March 31, represents a new and potentially more substantive phase in the implementation of GPRA—the opportunity to assess federal agencies’ actual performance for the prior fiscal year and to consider what steps are needed to improve performance and reduce costs in the future. The Department of the Interior has jurisdiction over about 450 million acres of land—about one-fifth of the total U.S. landmass—and about 1.76 billion acres of the Outer Continental Shelf. Figure 1 shows the location of the majority of onshore lands under Interior’s jurisdiction. As the guardian of these resources, Interior is entrusted with preserving the nation’s most awe-inspiring landscapes, such as the Grand Canyon, Yosemite, and Denali national parks; significant historic places, such as Independence Hall and the Gettysburg battlefield; and such revered national icons as the Statue of Liberty and the Washington Monument. At the same time, Interior is to provide for the environmentally sound production of oil, gas, minerals, and other resources found on the nation’s public lands; honor the nation’s obligations to American Indians and native Alaskans; protect habitat for fish and wildlife; help manage water resources in western states; and provide scientific and technical information to allow for sound decision-making about resources. In fiscal year 2001, the Congress provided more than $10 billion to carry out these responsibilities. With these resources, Interior employs about 67,000 people in its major agencies and bureaus at over 4,000 sites around the country. This section discusses our analysis of Interior’s performance in achieving the selected key outcomes, as well as the strategies it has in place, particularly strategic human capital management and information technology strategies, for accomplishing these outcomes. In discussing these outcomes, we have also provided information drawn from our prior work on the extent to which Interior has provided assurance that the performance information it is reporting is credible. Interior’s progress in maintaining the health of federally managed land, water, and renewable resources cannot be judged. Interior has four annual goals that relate to this outcome, including restoring lands and maintaining healthy natural systems. 
We cannot judge progress because, as we reported last year, the goals associated with this outcome do not foster a broad or departmentwide approach to measuring progress. Rather, Interior’s overview contains representative goals from various agencies and goals for a few departmental crosscutting efforts. For example, Interior uses only two examples (South Florida ecosystem and wildland fire management) of its ongoing efforts to maintain ecosystems to measure progress toward its goal of maintaining healthy ecosystems, even though it is involved in several other efforts, such as restoring the Chesapeake Bay Watershed, the California Bay Delta, and the Lower Mississippi Delta. While the two examples it chose are both important efforts that represent important aspects of Interior’s land management activities, these two efforts are not sufficient to signify the progress Interior is making overall in maintaining healthy ecosystems because many other ecosystems are managed by Interior. Another reason that progress cannot be judged is that in some cases, Interior does not provide an overall goal for what is ultimately to be achieved. For example, Interior indicates that in fiscal year 2000 it exceeded its goal of restoring 237,800 acres that have been disturbed or damaged by previous uses, such as mining, farming, or timber harvesting, but does not provide any information on how many acres in total need to be restored. Although it does not have an overall target for its land restoration goal, to its credit, Interior has included such information in parts of its overview that are not related to the health of federal lands outcome. This example illustrates the type of information that needs to be included in the restoration goal. In reporting on its efforts to protect and recover species listed as threatened or endangered, Interior provides data on the total number of species that were listed a decade or more ago and those that are now improving or stable from that list. Interior’s strategies for achieving its fiscal year 2002 goals, including making adjustments to address two of three performance measures that it did not meet in fiscal year 2000, appear to be clear and reasonable. For example, to address its fiscal year 2002 goal to reclaim damaged lands, Interior has a strategy to partner with nonprofit organizations to implement restoration projects. This strategy is important because, according to Interior, agency staff are often unavailable to perform work to reclaim damaged lands because they are involved in damage assessment cases. As a result, partnering with nonprofit organizations will be necessary to accomplish this goal. In addition, Interior has increased the number of acres to be treated for buildup of fuel materials, such as dead trees and underbrush, to deal with its failure to meet its goal in fiscal year 2000 and has developed a strategy to accomplish the higher goal. Interior’s strategy is to incorporate fire management activities as part of its land management and to provide additional funding to enable the responsible agencies to treat increased acres. While it has identified partnering and additional funding as options, Interior plans to conduct workforce planning for all its agencies—with a particular emphasis on the wildland fire program—in fiscal year 2002. This action could identify other strategies to achieve Interior’s goals. 
Interior does not have a strategy for achieving the third measure that it did not meet in fiscal year 2000, and its plan does not discuss any actions to meet its unmet measure for fiscal years 1999 and 2000 to acquire lands for the South Florida ecosystem. The Park Service reported that it achieved the outcome of safely satisfying the expectations of visitors in national parks and educating these visitors on the relevance and importance of the park units they visit. The Park Service has three goals, visitor satisfaction, safety, and education, all of which it met or exceeded in fiscal year 2000. In fact, for one goal—visitors’ satisfaction with the services, facilities, and recreation and education opportunities offered during their visits to the parks—there is little room for improvement, as the agency has met its goal of 95 percent of visitors being satisfied. However, in the past we have reported concerns about the completeness of some of the data related to the goal that deals with visitor safety. Specifically, our past work revealed that there is no systematic process in place for reporting structural fires in national parks. Without such a process, there is no assurance that structural fires are being consistently reported as part of the agency’s visitor safety statistics. This is important since the agency manages over 16,000 permanent structures, of which about a third are historic. In response to our prior recommendations, the agency acknowledged in its report that its structural fire program has significant deficiencies that need to be corrected, including those involving reporting issues. The agency’s strategies for continuing to meet and exceed its visitor satisfaction and visitor education goals appear clear and reasonable, although the plan does not provide information on the human capital aspects of the strategies. The Park Service plans to continue to manage facilities for visitors and to provide many different services for them, including interpretive programs and concessions. However, the Park Service’s strategies do not explicitly address its workforce needs to ensure the goals are met. One potential problem that the Park Service leadership has identified is that 68 percent of its concessions staff are eligible to retire in the next 5 years. In looking for new staff, the Park Service can take the opportunity to address problems we have found with the concessions contracting staff. In contrast to the visitor satisfaction and education strategies, the agency’s strategy for achieving its visitor safety goal is vague. The Park Service says it is developing a strategic plan and a new policy for visitor safety, but this is the same strategy identified in last year’s plan. The current plan does not contain sufficient information on what the Park Service will do for us to be able to assess the merits of the strategy. We cannot judge whether the Bureau of Indian Affairs (BIA) is making progress in protecting and preserving Indian trust lands and resources because the annual goals and performance measures it has established that relate to this outcome are output-related and therefore these measures do not assess progress toward the outcome. BIA has 18 goals, such as reforesting tribal lands and restoring wetlands on tribal lands, and the same number of performance measures for this outcome. One example of BIA’s output-related goals is to provide support for 50 tribal fish hatchery maintenance projects; the related performance measure is the number of projects supported. 
Such a goal does not show progress toward improving the resource. BIA recognizes the importance of developing goals that measure results and is attempting to establish them. According to the agency, the primary obstacle to the establishment of outcome goals for protecting and preserving Indian trust lands and resources is the lack of readily available data for measuring results. In other parts of its performance plan dealing with different outcomes, BIA has developed useful outcome-oriented goals. For example, its goal for law enforcement is to reduce violent crime on Indian lands and one of its long-term goals for community development is to reduce unemployment on Indian lands. Because BIA’s goals related to the outcome we reviewed are output goals rather than outcome goals, they are more straightforward and more easily attainable. For example, the strategy for the reforestation goal focuses on planting more trees. Thus, the strategies for achieving BIA’s goals are clear and reasonable. As the agency moves toward establishing outcome-related goals, as it has indicated it plans to do, it will need to develop new strategies that reflect the outcomes. The Minerals Management Service (MMS) reported that it is making progress toward ensuring that safe and environmentally sound mineral development occurs on the Outer Continental Shelf and that the public receives fair market value for it. But MMS also reported weaknesses related to data accessibility and reliability that it is working to correct. MMS has three performance goals (environmentally sound development, fair market value, and safety) and four performance measures related to this outcome. For fiscal year 2000, MMS achieved two goals, but did not achieve a third. MMS believes that a significant reason the third goal, safety, was not achieved is that offshore oil rig operators provided more accurate data on property damage costs, which were previously underestimated. MMS measures its performance for this outcome through two indexes and two ratios, which it calculates using data from various sources, including its own data systems and models, operators, and other agencies. MMS continues to reevaluate its performance measures because it recognizes that data collection and verification problems affect them. For example, MMS changed its performance goal to no more than 10 barrels spilled per million barrels produced (previously about 6 barrels) because this is a more realistic goal based on historical data. MMS also realized it could not, at this time, obtain accurate water quality data, which are needed as a component of the environmental index. As a result, MMS eliminated that component from its calculation until reliable data can be obtained. The MMS plan contains clear discussions of several strategies that appear to be reasonable approaches to maintaining performance and improving data quality. For example, MMS is developing a new environmental index that will focus on MMS-permitted activities, which it believes will alleviate some of its data collection problems. Other strategies include conducting more inspections of platforms to ensure operators observe safety procedures, working with the Department of Transportation to facilitate industry compliance and MMS enforcement, participating in development of industry safety standards, developing a risk-based inspection program, and improving data quality through revised regulations covering accident reporting by operators and sharing information with other nations. 
MMS has requested additional funding for some of these activities but does not provide information on whether these funds will be used for hiring new staff or training current staff. For the selected key outcomes, this section describes major improvements and remaining weaknesses in Interior’s (1) fiscal year 2000 performance reports in comparison with its fiscal year 1999 reports and (2) fiscal year 2002 performance plans in comparison with its fiscal year 2001 plans. It also discusses the degree to which Interior’s fiscal year 2000 reports and fiscal year 2002 plans address concerns and recommendations by the Congress, GAO, the Inspector General, and others. The fiscal year 2000 reports include more thorough discussions of data validation and verification issues than the previous year’s reports. On the other hand, Interior can improve the reporting of goals that are dropped or revised and can discuss the impact of actual performance on the likelihood of achieving planned performance in the current year. The fiscal year 2002 plans contain more appropriate explanations of goals, measures, and crosscutting issues than the previous year’s plans. Still, Interior can improve strategy sections to reflect strategic human capital and information technology plans and can improve discussions of program evaluations and the effects on performance goals and measures. Interior’s fiscal year 2000 reports are largely similar to last year’s reports, although Interior and its agencies continue to make improvements in some areas. Overall, each of the 10 reports is well-organized and useful; in particular, the “Goals-at-a-Glance” section of each report and plan provides a useful way to follow the progress from year to year. The reports generally provide excellent narrative to explain those situations in which the actual performance significantly deviated from the performance goals. The discussions are brief, focused, and to the point, and they provide the reader with useful information in tracking the agency’s performance. One significant improvement this year is Interior’s and the individual agencies’ attention to data validation and verification issues, an area that we have highlighted as needing improvement in our reviews of prior year performance reports and plans. In most cases, the agencies included thorough discussions of how they determined that their goals and measures are valid and accurate. For example, BIA’s report includes a comprehensive discussion of data validation and verification issues that generally provides the reader a good understanding of the credibility of the data, the data shortcomings, and the actions planned to improve the data. In one case, for example, BIA reported it will use a checklist to document trust evaluations to help verify the number it performs. Interior can improve its future reports in two ways. First, in its overview document, Interior can improve its reporting of goals that have been revised in previous year’s documents. For example, in the fiscal year 2000 plan, Interior had a goal to track progress in restoring lands in the Pacific Northwest; this was moved to a different section in the fiscal year 2000 report. In most instances, an explanation was provided when a new goal was added, but the absence of an explanation in this case confused efforts to track from one year to the next. 
Interior also revised the section of the report dealing with the restoration of the South Florida ecosystem to reflect the goals contained in the strategic plan issued by the South Florida Ecosystem Restoration Task Force in July 2000 in response to our recommendation. Interior plans to report on the results being achieved by the task force in restoring the ecosystem. In addition to reporting the results, it is important that Interior’s report reflect its contributions to the effort. Second, Interior can report on the effect of actual performance on the likelihood of achieving planned performance in the current year. For example, Interior indicated that it will not change its fiscal year 2001 measure for reclaiming damaged lands, even though it did not meet its fiscal year 2000 measure. It did not, however, state explicitly whether it could achieve the fiscal year 2001 measure. As with its performance reports, Interior’s performance plans continue to improve. In most cases, the plans contain appropriate explanations of the goals and measures. For example, last year we observed that the MMS plan did not include an explanation of the agency’s accident index—now called the safety index. This year’s plan has a clear, sufficient explanation of the index, including examples of data used to calculate the index components: the severity factor and safety risk factor. During fiscal year 2000, MMS had planned to establish a more comprehensive safety index and a new baseline for use in 2001, but it explained in its plan that additional time will be needed to develop a valid baseline. Overall, Interior’s and the agencies’ plans also contain useful explanations of management and crosscutting issues. For example, BIA provided greater explanation of its crosscutting issues, which is significant because practically all of its functions are associated to some degree with one federal agency or another. To further improve the plans, Interior can integrate discussions of strategic human capital management issues and technology improvements with their strategies for achieving performance goals. In some cases, these issues have already been included in discussions of strategies—for example, Interior identified a lack of personnel and the need to work with nonprofit organizations to achieve its land reclamation goals. Also, MMS identified a general need to use information technology to improve its efficiency. In other cases, such as potential succession planning difficulties in the Park Service concessions program, these issues are not yet part of the discussion of strategies. Furthermore, Interior and its agencies can continue to make program evaluations a more integral part of the plans in future years because the results of the newly developed program evaluations can lead to changes in programs and performance goals and measures. For example, MMS used program evaluations, including two fiscal year 2000 Inspector General audits and the annual Inspector General financial management reviews, to measure past performance and used other studies of information and data validity, including an environmental monitoring study of industry compliance, to establish performance measures. The Park Service, however, missed the opportunity to improve upon its last year’s plan by not fully disclosing the data limitations we had identified for its safety goal and by not identifying specific steps to address the limitations. 
GAO has identified two governmentwide high-risk areas: strategic human capital management and information security. Interior reported some progress in resolving the strategic human capital challenge in fiscal year 2000, having completed workforce planning guidelines in June 2000. Also, Interior and its agencies identified some areas in which they have human capital concerns. For example, Interior indicated that staffing levels could limit the effectiveness of the wildfire program and set a goal to conduct workforce planning in the wildland fire program in fiscal year 2002. Interior indicated that it will undertake human capital planning for all its agencies in fiscal year 2002 and is accelerating its planning to respond to an Office of Management and Budget (OMB) bulletin instructing federal agencies to conduct a workforce analysis by June 29, 2001, but it did not substantially address how it intends to use human capital to achieve its goals. Interior did not have fiscal year 2000 goals to address progress in the information security area and, as a result, did not discuss progress in this area. Interior did include a goal and related measures for fiscal year 2002 to achieve specific improvements in this area; however, it did not indicate what steps it would take to effectively correct previously identified security weaknesses. In addition, GAO has identified four other major management challenges facing Interior. Interior’s performance reports included goals and measures for three of these challenges—improving the management of the Park Service, Indian trust funds, and ecosystem restoration efforts—some of which the agency met or exceeded, and others it did not meet. For the remaining challenge—improving the management of expanding amounts of land for which it is responsible—Interior did not have performance goals or measures. Interior and the relevant agency, the Bureau of Land Management, did discuss the strategies the agency would use to meet this challenge, including improving program management with a national-level task force. Appendix I discusses the responses of Interior and its agencies to the management challenges identified by both GAO and Interior’s Inspector General. Interior has 10 reports and plans, 9 for its individual agencies and 1 departmental overview. For three of the selected key outcomes—park visitation, Indian trust assets, and Outer Continental Shelf development— we reviewed the agency-level reports and plans to assess progress toward the outcomes. The outcome on federal land management is broad, however, with multiple agencies performing the work and contributing to the goals. For this reason, we used Interior’s overview report to assess progress toward the outcome. Although the individual agencies have goals that relate to the outcome, we did not review their reports and plans because we believe the overview should provide a comprehensive look at the agencies’ progress. As agreed with your staff, our evaluation was generally based on the requirements of GPRA, the Reports Consolidation Act of 2000, guidance to agencies from OMB for developing performance plans and reports (OMB Circular A-11, Part 2), previous reports and evaluations by us and others, our knowledge of Interior’s operations and programs, our identification of best practices concerning performance planning and reporting, and our observations on Interior’s other GPRA-related efforts. We also discussed our review with agency officials and with Interior’s Office of Inspector General. 
The agency outcomes that were used as the basis for our review were identified by the Ranking Minority Member of the Senate Governmental Affairs Committee as important mission areas for the agency and do not reflect the outcomes for all of Interior’s programs or activities. The major management challenges confronting Interior, including the governmentwide high-risk areas of strategic human capital management and information security, were identified by GAO in our January 2001 performance and accountability series and high-risk update and by Interior’s Office of Inspector General in December 2000. We did not independently verify the information contained in the performance reports and plans, although we did draw from our other work in assessing the validity, reliability, and timeliness of Interior’s performance data. We conducted our review from April 2001 through June 2001 in accordance with generally accepted government auditing standards. We provided a draft of this report to the Department of the Interior for its review and comment. Interior chose to meet with us to provide oral comments, and we met with the Director of the Office of Planning and Performance Management and other officials from Interior on June 1, 2001, to discuss these comments. While the officials generally agreed with the report’s findings, they raised a concern that the governmentwide high- risk area of strategic human capital management—which we reviewed in the management challenges section of this report—was first identified by us in January 2001, after Interior had completed much of its planning and reporting for fiscal year 2000. Interior noted that it has included a goal in its fiscal year 2002 plan to begin workforce planning and that it is accelerating its planning in response to an OMB directive to gather data on workforce numbers by June 29, 2001. We agree with Interior officials that the issue of strategic human capital management was identified by us as a high-risk area in January 2001 when, according to Interior officials, it had finished most of its planning and reporting for fiscal year 2000. To address their concern, we revised the report to underscore this date when we first mention the high-risk issues in the report. We also noted, however, human capital issues have been a long-standing problem for many federal agencies and that the inclusion of human resources in performance plans has been required as part of the GPRA process since 1999, when the first performance plans were developed. In addition, OMB Circular A-11 contains guidance to federal departments to discuss strategies, including the planned use of human resources, to achieve goals in their annual performance plans. The Interior officials also provided technical clarifications, which we made as appropriate. As arranged with your office, unless you announce its contents earlier, we plan no further distribution of this report until 30 days after the date of this letter. At that time, we will send copies to the Chairman of the Senate Committee on Governmental Affairs, the Chairman and Ranking Minority Member of the House Committee on Government Reform, the Secretary of the Interior, the Director of the Office of Management and Budget, and other interested congressional committees. This report will also be available on GAO’s home page at http:\\www.gao.gov. If you or your staff have any questions, please call me at (202) 512-3841. Key contributors to this report are listed in appendix II. 
Table 1 identifies the major management challenges confronting the Department of the Interior, which include the governmentwide high-risk areas of strategic human capital management and information security. Interior has 10 performance reports/plans, including 1 that serves as a departmental overview, 1 each for the 8 major bureaus within the Department, and 1 for the Office of Insular Affairs. The first column of the table lists the management challenges that we and the Department of the Interior’s Office of Inspector General (OIG) have identified. The second column discusses the progress Interior has made in addressing these major management challenges, as discussed in its fiscal year 2000 performance report. The third column discusses the extent to which Interior’s fiscal year 2002 performance plan includes performance goals and measures to address the management challenges that we and Interior’s OIG have identified. While Interior’s performance reports discussed the Department’s progress in resolving many of its management challenges, the Department did not have goals for the management challenge dealing with information security and therefore did not discuss progress in resolving the challenge in fiscal year 2000. Interior has been engaged in workforce planning and information security activities during fiscal year 2001. Interior’s fiscal year 2002 performance plans provided goals and performance measures for most of its management challenges. For Interior’s 14 major management challenges, its performance plans had (1) goals and measures that were directly related to 9 of the challenges; (2) goals and measures that were indirectly applicable to 1 challenge; and (3) no goals and measures related to 3 of the challenges, although strategies to address them were discussed. The last challenge relates to Government Performance and Results Act, which is the subject of this report and therefore was not addressed in the matrix. Arleen Alleman, Julie Gerkens, Susan Iott, Dave Irvin, Lisa Knight, Mike Koury, Jeff Malcolm, Sherry McDonald, Charles Vrabel, and Ned Woodward made key contributions to this report.
This report reviews the Department of the Interior's fiscal year 2000 performance report and fiscal year 2002 performance report plan required by the Government Performance and Results Act. Specifically, GAO discusses Interior's progress in achieving the following four outcomes: (1) maintaining the health of federally managed land, water, and renewable resources; (2) ensuring visitors' satisfaction with the availability, accessibility, diversity, and quality of national parks; (3) meeting the federal government's responsibility to preserve and protect Indian trust lands and resources; and (4) ensuring the safe and environmentally sound development of mineral resources. GAO could not judge the agency's progress in promoting the health of federally managed land, water, and renewable resources because the goals Interior has reported do not foster a broad or departmentwide approach to measuring progress. Although the Park Service's strategies for continuing to meet and exceed its visitor satisfaction and visitor education goals appear clear and reasonable, the agency's fiscal year 2002 performance plan lacks information on the strategic human capital management strategies to achieve this outcome. GAO cannot judge the Bureau of Indian Affairs' progress in protecting Indian trust lands and resources because the annual goals it has established are output-related and do not assess progress toward the outcome. The Minerals Management Service has had mixed results in meetings its mineral development goals. Its goals for meeting its fiscal year 2002 goals seem reasonable.
The establishment of DOE brought together a collection of agencies with diverse institutional cultures, structures, and procedures. Since its inception, funding priorities for the department’s varied mission responsibilities have shifted and new challenges have been added. Over the years, DOE’s ability to effectively fulfill these responsibilities has been repeatedly questioned, with calls for dismantling the department reaching a highpoint in the mid-1990s. We concluded at the time that the Congress and the administration needed to rethink DOE’s missions and structure. To foster a secure and reliable energy system that is environmentally and economically sustainable; to be a responsible steward of the Nation’s nuclear weapons; to clean up the department’s facilities; to lead in the physical sciences and advance the biological, environmental, and computational sciences; and to provide premier scientific instruments for the Nation’s research enterprise. DOE groups these responsibilities into four “business lines,” which DOE describes as follows: Energy resources promotes the development and deployment of systems and practices that provide energy that is clean, efficient, reasonably priced, and reliable; National nuclear security enhances national security through military application of nuclear technology and by reducing global danger from the potential spread of weapons of mass destruction; Environmental quality cleans up the legacy of nuclear weapons and nuclear research activities, safely managing nuclear materials, and disposing of radioactive wastes; and Science advances tools to provide the foundation for the department’s applied missions and to provide remarkable insights into the physical and biological world. Supporting these mission-related business lines is a “corporate management” function that constitutes a fifth “business line.” This function includes putting in place an effective organizational structure; efficient management practices and information systems; procedures to ensure the safety and health of the department’s workforce and the public, and to protect the environment; and practices to ensure accountability to the public. According to DOE, “the department’s success within its diverse portfolio of programs is largely dependent upon a strong and sound corporate management function.” DOE’s budget priorities have gradually shifted over the years from energy policy to defense and now environmental cleanup. In fiscal year 2000, the environmental quality business line was the department’s largest budget category, accounting for approximately 34 percent (about $6.7 billion) of its $19.7 billion budget. National nuclear security follows, with 25 percent of the budget (about $5 billion). Science is allotted 16 percent of the budget (about $3.2 billion), and energy resources, the original responsibility of the department, accounts for 13 percent of the budget (about $2.5 billion). DOE has a workforce of almost 16,000 employees and over 100,000 contractor staff located at over 50 major installations in 35 states. Crucial to DOE’s missions and performance are its 22 laboratories, 11 of which are responsible for multiple programs. Although each of these 11 multiprogram laboratories conducts work in every DOE business line, 3 concentrate on national security issues, 5 on basic science, 2 on environment, and 1 on energy. DOE’s other laboratories are program- specific. The budgets for all 22 laboratories total nearly $8 billion annually. 
DOE has a complex structure to manage its diverse missions. All staff and support offices at headquarters report to the Secretary of Energy and a deputy secretary, who serves as the chief operating officer. Below them are two under secretaries: one for national nuclear security, who is also the Administrator of the National Nuclear Security Administration (NNSA), and the other for the energy, science, and environmental missions. A variety of deputy administrators, directors, and assistant secretaries are subordinate to the two under secretaries and oversee individual program areas. DOE has an extensive set of field offices, which are responsible for overseeing contractor performance. The field offices include 11 “operations” offices and several smaller, affiliated “area” and “site” offices, which are usually located at contractor sites. For example, DOE has an area office at the Los Alamos National Laboratory that reports to an operations office in Albuquerque, New Mexico. DOE also has other field offices affiliated with the energy resources business line. Contractors manage and operate DOE’s facilities and sites under the supervision of department employees. Given that DOE spends most of its budget through these contractors, DOE’s ability to direct, oversee, and hold its contractors accountable is crucial to its mission success and overall effectiveness. DOE’s contracting practices are rooted in the development of the atomic bomb under the Manhattan Project during World War II. Special contracting arrangements were developed by DOE’s predecessor agencies, with participating industry and academic organizations, to reimburse all of the contractors’ costs and to indemnify contractors against any liability they might incur. Most of the current contractors are for-profit companies that receive incentives for meeting certain performance objectives. Several large contractors, however, are nonprofit institutions, such as the University of California, which typically operate research institutions for DOE. Some of these nonprofit contractors also have financial incentives for achieving certain DOE goals. In August 1995, we reported that a fundamental reevaluation of DOE was warranted, based on prior reviews by us, DOE’s Inspector General and other experts, and our survey of experts. All of these reviews identified serious management weaknesses at the department. Our report was neither the first nor the last to recommend rethinking the department’s structure and mission responsibilities. Our August 1995 report said that DOE had gone through many evolutionary changes since its creation, in part resulting from shifts in priority among its diverse responsibilities. We concluded that even though the department had embarked on some major restructuring, in line with government-wide initiatives to reduce the federal workforce and become more results-oriented, there was no assurance that these reforms would fundamentally alter and improve the ways that DOE managed its missions. We noted that attempting to resolve management weaknesses without first evaluating and achieving consensus on missions was a risky approach to restructuring the department. Overwhelmingly, the experts we surveyed concurred that DOE must change. While there was general consensus that DOE should retain and concentrate on essential energy activities, opinions differed on where to place other departmental responsibilities.
Most experts considered moving the weapons-related and environmental cleanup responsibilities to other federal agencies and creating a new organizational structure for the national laboratories, such as sharing them among federal agencies or, in some cases, privatizing them. We concluded that the ultimate structure of each mission should be determined by the option that encouraged the most cost-effective practices, attracted necessary technical talent, provided ample flexibility to react to changing conditions, and exhibited the highest degree of accountability. In the early to mid-1990s, newly appointed Energy Secretary Hazel O’Leary initiated many reforms to address long-standing criticisms of how DOE conducted its business. As part of this process, DOE commissioned various study groups and panels to make recommendations intended to fundamentally improve the department’s efficiency and effectiveness. Based on these recommendations, DOE launched a series of reforms to realign and downsize the agency, as well as address structural weaknesses and improve its management and oversight of contractors. Many of these reforms achieved their immediate objectives. In 1993, DOE launched an internal initiative to improve safety and awareness of good practices throughout all aspects of the department’s work. The initiative included more attention to risk reduction, improving the qualifications of the workforce, organizational realignment, and moving to external regulation of facilities. In particular, outside reviewers and DOE’s own senior managers questioned the continued justification for the department’s self-regulation of its contractor-operated facilities, given that virtually all other federal facilities are externally regulated (including some DOE facilities). In 1994, legislation was proposed and the Congress held hearings to assess the proposal to move to external regulation, but no action was taken. A year later, a DOE advisory committee concluded that secrecy had been used as a shield to deflect public scrutiny of safety and health problems at these facilities, and that the widespread environmental contamination at some facilities was clear evidence that self-regulation had failed. Also in 1993, the Energy Secretary told the Congress that DOE was not adequately in control of its major facility and site contracts and, therefore, “not in a position to ensure effective and efficient expenditures of taxpayer dollars.” To improve this condition, the Secretary created the Contract Reform Team. (We had previously designated DOE contracting as high risk, making the department vulnerable to waste, fraud, abuse, and mismanagement. It remains on our high-risk list today.) DOE’s contract reform team made more than 45 recommendations, including a call for strengthening financial information systems, using performance-based contracts, and including performance criteria and incentives in contracts. One significant recommendation urged DOE to shift from making noncompetitive contract awards to adopting a full and open competitive process. DOE also commissioned two special task forces in 1993 to examine the quality and effectiveness of the department’s laboratories and the management of its energy research and development (R&D) mission. The Secretary of Energy Advisory Board chartered the Task Force on Alternative Futures for the Department of Energy National Laboratories, chaired by Robert Galvin, a former chairman of the Motorola Corporation, to look at the laboratories.
The task force’s final report, issued in February 1995, concluded that DOE’s laboratories were in “serious jeopardy, owing to patterns of management and organization that have grown in complexity, cost, and intrusiveness over a long period.” The report called for a more disciplined research focus by the national laboratories and recommended improvements in DOE management of these facilities, including moving to an independent management structure resembling a government corporation. In response, DOE created the Laboratory Operations Board, an advisory group whose purpose was to provide dedicated management attention to laboratory issues. The Secretary chartered The Task Force on Strategic Energy Research and Development, chaired by energy analyst Daniel Yergin, to examine DOE’s energy resources business line. The June 1995 report of this task force assessed the rationale for the federal government’s support of energy R&D, reviewed the priorities and management of the overall program, and recommended ways to make it more efficient and effective. The task force recommended that DOE streamline its R&D management, develop a strategic plan for energy R&D, eliminate duplicative laboratory programs and research projects, and reorganize and consolidate the many dispersed R&D programs at DOE laboratories. The Galvin and Yergin reports led to many changes in how DOE interacts with its contractors, including a streamlining of departmental orders and procedures. In addition to these improvement efforts, DOE also established a strategic alignment initiative in the fall of 1994, following the results of its extensive strategic planning process. The strategic plan was developed based on the principles of “total quality management” and the desire to increase “stakeholder” participation in decision-making. Under this plan, the department organized itself by “business lines” that were essentially the same as they are today. The first phase of the strategic alignment initiative was employee driven and aimed to identify better, more cost-effective means of performing the core missions of the department as defined in the strategic plan. In May 1995, DOE announced its plan to achieve $1.7 billion in savings over the next 5 years by reducing overhead costs; closing or consolidating field offices; realigning the organizational structure; reducing federal employment; and initiating the delegation of some departmental responsibilities to the private sector (referred to as “privatization”). A portion of the overhead cost savings was to come from externally regulating environment, safety, and health activities; reforming contracting practices; and streamlining departmental oversight. In August 1995, DOE released the specifics of 45 implementation plans, developed in the second phase of the initiative, to guide the cost-saving efforts and improve the department’s performance and accountability. DOE officials were well aware of the criticism aimed at their department in the early 1990s. While maintaining that their own initiatives could transform the department, DOE officials also recognized that others were calling for more radical changes, ranging from organizing the national laboratories under a corporate structure to completely dismantling the department. 
DOE officials stated in response to our August 1995 report that while there is “no assurance DOE’s initiatives will succeed, we know that no alternative approach can provide that assurance either.” The department continued to assert that its reforms, unprecedented in its history, would transform the department into a “positive model of organizational change and effectiveness.” According to the Deputy Secretary at the time, the department’s initiative promised to “fundamentally alter how we look and how we conduct business….” Unresolved management weaknesses have led to recurring performance problems within DOE. Our analysis of more than 200 audit and consultant reports issued since 1995 that pertain to the department identified persistent weaknesses in the integration of strategic plans and information systems; clarification of the respective roles and responsibilities between headquarters and field offices; maintenance of a technically qualified workforce; and implementation of contract management reforms. While many of DOE’s reforms have achieved their immediate objectives, weaknesses persist and have been linked to wide-ranging performance problems, including major cost overruns and schedule delays in a variety of noteworthy projects. DOE has steadily improved its strategic and annual performance plans in response to past criticism. However, the department has not been able to use its strategic plan and other corporate management tools, such as a department-wide information system, to organize and integrate its missions. According to DOE, its strategic plan is a composite of plans guiding the activities of its major programs within the four business lines. This approach has created some management problems that have been identified in our past reports, in particular: Disconnects exist between the current strategic “business lines” and the way the department is actually organized. While DOE’s strategic goals and objectives are stated within the context of the business lines, the department is organized and managed by its multiple programs. In some cases, several programs contribute to the same business line without any apparent integration. While we have called on DOE to rectify this misalignment, it has not done so. DOE has asserted that its structure is affected by external factors and that no single alignment will yield an organization that eliminates crosscutting objectives. DOE told us that it has therefore organized itself around budget decision units and set program performance measures that are linked to each strategic plan business line. Shortcomings persist in program planning and priority setting, as well as in the use of strategic goals and measures to describe specific activities. For example, we could not determine from DOE’s 1999 and 2000 accountability and performance reports what the department was trying to accomplish. We also noted that DOE had not corrected the problems in its strategic goals and measures that we had identified 2 years earlier. According to DOE, changes were made in the fiscal year 2001 Annual Performance Plan to track accomplishments by budget decision units rather than by the strategic plan. DOE has not been able to develop a single strategic plan that integrates its vast laboratory network. The laboratories, particularly the multiprogram ones, operate largely as separate entities. DOE has no central program control over the laboratories; instead, since 1999 it has required each laboratory to report to a lead headquarters program office.
Integration into the strategic plan is supposed to occur through the interests of the headquarters offices, even though the major laboratories conduct work in all business lines. DOE does not have an integrating management information system to consolidate its business, organizational, and operational information throughout the department. In the absence of such an integrating system, mission and program areas have developed their own systems and procedures. A September 2000 DOE Office of Inspector General report noted that duplicative systems existed or were under development at virtually all organizational levels within the department. DOE has acknowledged that a significant barrier to greater departmental integration of information systems has been the Chief Information Officer’s lack of control and influence over the program budgeting processes. Problems continue with the validity and verifiability of the data used by the information systems to provide a baseline from which to track performance across many parts of the department. Since 1995, there have been a number of attempts to clarify roles and responsibilities between headquarters and field staffs to improve lines of authority and accountability. A resolution for this management issue has been elusive because of the way DOE oversees its contractors. Typically, field office managers sign contracts and rate contractors on their performance, but direction on programs or project work comes from the headquarters program offices. Additionally, at least in the past, headquarters staff offices have been allowed to give direct orders to field offices outside of the formal chain of command. The reports that we reviewed frequently cited problems with such intermingled roles and responsibilities. A 1997 study by the Institute for Defense Analyses revealed that the coordination between DOE programs was an “undisciplined, uncoordinated, essentially ad hoc process between the field managers and each of the program assistant secretaries.” The institute concluded that there was no assurance that resource decisions were weighed against each other in a complete and consistent manner. In 1999, the Panel to Assess the Reliability, Safety and Security of the United States Nuclear Stockpile reported that DOE suffered from a diffusion of functional responsibilities across a range of staff and line organizations that has led to clouded lines of authority and blurred responsibilities and accountability. In 1999, the President’s Foreign Intelligence Advisory Board reported that DOE’s “decentralized structure, confusing matrix of cross cutting and overlapping management, and shoddy record of accountability has advanced scientific and technological progress, but at the cost of an abominable record of security….” The board labeled DOE’s organization as a “dysfunctional” structure that has too often resulted in mismanagement of security in weapons-related activities and in a lack of emphasis on counterintelligence. The board concluded that “for the past two decades, the Department of Energy had embodied science at its best and security at its worst.” A 1999 National Research Council review of DOE’s project management problems found that DOE’s “organizational structure makes it much more difficult to carry out projects than in comparable private and public sector organizations.” The council noted that by operating as an aggregate of independent agencies amid various program and field operations offices, DOE had failed to benefit from economies of scale.
In 1999 and 2000, we attributed problems at DOE’s Spallation Neutron Source project under construction in Oak Ridge, Tennessee, and at DOE’s National Ignition Facility being built in Livermore, California, to, among other factors, DOE’s complex management and organizational structure and unclear lines of authority. A March 2000 National Academy of Public Administration report on DOE’s Energy Efficiency and Renewable Energy Office found that the office had suffered from unclear roles and responsibilities among various organizational levels. The Academy noted that there were “significant differences in understanding of the roles and responsibilities for program and project management.” Recognizing these problems, DOE has changed reporting relationships between headquarters and field offices in an attempt to clarify lines of authority and to strengthen accountability. The latest major realignment occurred in 1999 with the assigning of field offices to lead program secretarial offices at headquarters. In addition, a Field Management Council was established to coordinate the direction given to the field by program and support offices. DOE’s field offices now report to whichever headquarters program office provides the most funding to the contractor sites overseen by the field managers—an approach used without success in the past. This realignment had to be modified slightly in late 2000 to accommodate the establishment of NNSA. The current reporting arrangement, however, has given rise to some new management problems. We found, for example, that there is considerable uncertainty about reporting relationships in situations where many different headquarters programs support activities at shared facilities and complexes. This problem is particularly acute at DOE’s multiprogram national security laboratories, where work is conducted on all of DOE’s missions, yet field management must report only to NNSA headquarters. Thus, non-NNSA program staff in headquarters must work through NNSA management in the field to accomplish work related to the science and environmental missions. Conversely, some NNSA staff members work in field offices that report to headquarters programs in science or environmental management, even though they can receive direction only from NNSA. Various memorandums of agreement have been created to sort out these arrangements and to provide support services across business lines. However, staff in some field offices that we visited told us that they were unsure how the new reporting relationships would work. The establishment of NNSA has yet to clarify roles and responsibilities within the nuclear security business line and may have complicated reporting relationships, at least temporarily. In early 2001, we and the Panel to Assess the Reliability, Safety and Security of the United States Nuclear Stockpile challenged NNSA to develop a plan for fundamentally redefining roles and responsibilities among its headquarters and field organizational units. The panel called on NNSA to “clarify functional authority, reduce management layers, eliminate micromanagement [of the laboratories], and downsize.” As late as April 2001, we found that NNSA had not specified the roles and responsibilities of each of the headquarters offices; the relationship between the headquarters and the field offices; whether headquarters or field offices would direct and oversee contractors; and the relationship between the NNSA staff and the rest of DOE.
In its May 2001 interim report, NNSA stated that it intended to seek expert advice on clarifying relationships between headquarters and the field, as well as on other issues in preparation for an October 2001 status report to the Congress. On June 26, 2001, in testimony before the House Armed Services Committee, the chairman of the Panel to Assess the Reliability, Safety and Security of the United States Nuclear Stockpile noted that “some of the more fundamental management problems [with DOE] still remain to be addressed.” Lack of technically qualified staff within DOE is another long-standing management weakness that has been linked to performance problems. We have raised concerns about this weakness since 1991, and many other external reviewers have echoed these concerns since then. For example, a 1997 report by the Institute for Defense Analyses pointed out deficiencies in the technical capabilities of those DOE managers who had survived departmental downsizing. In addition, the Defense Nuclear Facilities Safety Board warned in 1997 that, given likely future reductions in DOE’s budget, the department needed to make advance preparations to avert the loss of technically competent safety personnel. Responding to these and other concerns, the department announced a new Workforce for the 21st Century Initiative to strengthen technical and management capabilities for its mission requirements. In particular, a 1998 internal DOE study confirmed the need to develop programs to address workforce management weaknesses in the procurement environment, such as recruitment, retention, and succession planning. However, despite these actions, additional internal and external reports that followed have raised concerns about the qualifications of DOE’s workforce. We reported in 1999 that while the Spallation Neutron Source project appeared to be on schedule, it had already exhibited warning signs of failure because it lacked personnel with technical skills and managerial experience. In 1999, the Commission on Maintaining United States Nuclear Weapons Expertise found that DOE’s aging workforce, the tight market for talent, the lack of a long-term hiring plan, and other constraints had raised serious doubts that the department would be able to maintain its nuclear weapons expertise in the future. In 1999, the National Research Council found that DOE did not have “the necessary experience, knowledge, skills, procedures or abilities to prepare good performance measures” for its contracts. In its fiscal year 2001 Annual Performance Plan, the department stated that it had “fully addressed” the lack of technical and management skills by establishing a Corporate Education, Training and Development Plan in fiscal year 1999. DOE pointed out that it had training programs in place for procurement professionals, property managers, and information management specialists, and that it was establishing a new program to rebuild a talented and well-trained corps of R&D technical program managers. In particular, DOE reported in March 2000 that it had initiated a program to develop future leaders of the acquisition workforce. The Defense Nuclear Facilities Safety Board’s 2000 report credited DOE with taking steps to improve the technical capabilities of personnel at its defense nuclear facilities, but pointed out the need for DOE’s leadership to pay increased attention to this issue and to follow through with its improvement plan.
Notwithstanding these efforts, the department has now acknowledged that its workforce weaknesses represent a much broader challenge encompassing the larger arena of human capital management. In commenting on a draft of this report, DOE said it had additional workforce restructuring efforts under way. In support, DOE provided us with its September 2001 “Five-Year Workforce Restructuring Plan,” prepared in response to an Office of Management and Budget requirement imposed on all federal agencies. The plan describes itself as a “corporate roadmap” for, among other things, reducing manager and organizational layers, increasing spans of control, and redeploying positions. DOE has made process improvements in its contracting by implementing many of the 1994 contract reform team recommendations. For example, DOE has increased competition, imposed greater contractor liability, phased in performance-based incentives, and begun using results-oriented statements of work. According to DOE, 26 of its 37 major site and facility management contracts have now been competed, up from just 3 prior to 1994. All of these new contracts employ performance-based techniques in defining contractor requirements, evaluating performance, and linking financial incentives to results. In addition, according to DOE, there has been an overhaul and standardization of contract regulations and the issuance of guidance on proper contract administration. Nonetheless, the department has been criticized for not fully implementing its contract reforms, as noted in several reports. In October 1997, DOE’s Inspector General reported problems with performance-based contracting at DOE’s Nevada Operations Office. The report found that performance-measurement milestones had been estimated after the work had actually been completed. In addition, performance measures associated with this aspect of the contract were vague, leading DOE to reward performance that could not be objectively validated. In May 1999, we reported that while the DOE laboratory contracts we examined had some performance-based features, there was a wide variance in the number of performance measures and the types of fees negotiated. We also found that DOE had not determined whether giving higher fees to encourage superior performance by laboratory contractors was advantageous to the government. The National Research Council’s 1999 report concluded that DOE had had limited success in establishing and managing performance-based contracts. In its 2001 follow-up report, the Council noted that DOE had yet to devise and implement either a contract performance measurement system or an information system that can track contracts and contractor performance while cycling information back into key decisions. DOE’s Inspector General reported in April 2000 that performance-based incentives in the contract for DOE’s Idaho National Engineering and Environmental Laboratory had not been fully successful in improving performance and reducing costs. For some incentives, performance declined or remained unchanged. For other incentives, performance improved, but the gains were overstated, the contractor was compensated twice, improvements either could not be linked directly to actions taken by the contractor during the incentive period or were made for a disproportionately high fee, and the contractor could not demonstrate any reduction in cost.
DOE’s Inspector General has also identified other areas where contract reforms have not been fully implemented, including the following: A November 1998 audit determined that 16 of DOE’s 20 major for-profit operating contracts did not incorporate liability provisions called for under contract reform. A December 1999 audit concluded that the department’s award procedure “effectively circumvented federal requirements designed to promote and ensure the appropriate use of competition in contracting.” A January 2000 audit of outsourcing opportunities at the Los Alamos National Laboratory determined that although the laboratory contractor found that only 4 of 184 support services could potentially be obtained at lower cost from outside entities, in fact at least 128 had outsourcing potential. A February 2000 audit found that only one of the four contractors reviewed had fully met a requirement to prepare “make-or-buy” plans to obtain supplies and services on a least-cost basis. A January 2000 summary report on management challenges facing DOE pointed out that while incentives have been included in most contracts, reviews show systemic weaknesses in the way these incentives have been administered. Incentive fees have risen dramatically, but there has been no commensurate increase in financial risk to DOE’s major contractors. DOE has also struggled to effectively implement its privatization program, which is intended to keep the department’s environmental cleanup projects on schedule at budgeted costs. For example, the cleanup contracts were terminated at two noteworthy privatization projects—the Hanford tank-waste project and the Idaho Pit 9 cleanup project—because of concerns with rapidly escalating costs and contractor performance. Finally, while DOE has increased the number of major site and facility contracts that it awards competitively, several major contracts have not been competed, including nine contracts with a combined value of $22 billion. Furthermore, despite glaring performance problems at certain laboratories, DOE has excluded its largest laboratories from full and open competition. For example, DOE’s contracts with the University of California to operate two national laboratories have not been opened to competitive bidding since they were awarded over 50 years ago, despite reported security and project management problems at these laboratories. In commenting on a draft of this report, DOE said that it has not been required to competitively award these types of contracts (Federally Funded Research and Development Centers) and that it “actively considers the use of competitive procedures for such contracts and has competed them where appropriate.” DOE also said that it retained its contracts with the University of California based on “national security considerations.” Several of the unresolved management weaknesses that we identified have been linked to recurring problems with the management of programs and projects. In 1997, we documented that of 80 DOE projects started over a 16-year period that cost at least $100 million each, only 15 were completed, and most of these experienced scheduling delays and cost overruns; 31 were terminated; and the 34 ongoing projects were exhibiting scheduling delays and cost increases. Since 1995, DOE and its contractors have drawn a litany of criticism for poor performance on several specific projects.
In its 1999 report on DOE project management, for example, the National Research Council observed that DOE projects commonly overrun their budgets and schedules, leading to pressures for cutbacks that have resulted in facilities that do not function as intended, projects that are abandoned before they are completed, or facilities that have been so long delayed that, upon completion, they no longer serve any purpose. In short, the Council found, DOE’s record calls into question the credibility of its procedures for developing designs and cost estimates and managing projects. The Council not only reiterated a listing of past project failures, but also noted that 26 major projects under review at the time of its study were showing notable deficiencies in project management. The report concluded that DOE’s prior efforts to solve project management problems had been so unsuccessful that achieving improvements in this area would require fundamental changes in organizational structures, documents, policies and procedures, as well as drastic changes in the “culture” of the department. DOE acknowledged the persistence of problems in its project management practices in the department’s fiscal year 2001 performance and accountability report. DOE stated that “the results from 33 independent external project reviews, undertaken this past year, indicate serious systemic issues needing correction. Among the most prevalent problems are inadequacies in technical scope, schedule planning and control, cost estimating, and lack of clarity on roles and responsibilities.” In response to the Council’s 1999 recommendations for improving project management in DOE, the department created the central Office of Engineering and Construction Management and affiliated support offices in the three largest departmental program offices. These offices intend to create new policies and procedures, conduct independent project reviews, and train staff in project management practices. The department also plans to create a career track for project managers. However, a follow-up report by the Council in January 2001 raised concerns about DOE’s leadership commitment to implementing the report’s recommendations, particularly regarding the role of the Office of Engineering and Construction Management. In commenting on a draft of this report, DOE said that many of its projects are "unique, one-of-a-kind" ventures that contain significant research and development, which can affect cost and schedule assumptions. We agree with DOE that its projects are often challenging. However, such challenges are not an excuse for poor project management performance, a common problem in many DOE activities. The persistence of DOE management weaknesses and project problems, despite the many actions taken by the department to improve its performance, is indicative of underlying impediments that have not been addressed. We found that the department’s diverse missions, dysfunctional organizational structure, and weak culture of accountability impede fundamental improvement at DOE. Unless these underlying and interrelated impediments are addressed, DOE’s management and performance problems will likely continue. Fundamental improvement in DOE’s performance is impeded by the difficulty of effectively integrating the management of the department’s diverse missions. DOE’s energy, environmental, science, and national nuclear security staffs operate largely as separate entities within the department, maintaining their own operating styles and decision-making practices.
For example, some mission areas retain strong central control over their programmatic actions, as in the science area, while others delegate more of this responsibility to the field, as in the environmental area. Uncoordinated and inconsistent direction from program headquarters offices still places the burden of effectively integrating varying goals, objectives, and management styles on the field managers who must manage this diversity at shared facilities. The National Research Council’s 1999 report on DOE project management noted that “cultures, attitudes and organizational commitments have shaped service delivery, and as DOE’s missions changed in response to external conditions, the diversity of cultures inherited by the department’s collection of agencies did not necessarily change with it.” This diversity of mission cultures under one roof has long prevented DOE from developing a consistent approach in its systems, structures, and interactions with contractors. For example, DOE’s national security programs have a long history of operating in secret, which leads to practices that are quite different from those of DOE’s science programs, which are more open and flexible—yet these programs operate at shared facilities. This clustering of diverse programs has complicated lines of authority, thus diluting accountability among staff, and has impeded DOE’s ability to oversee contractors. It has been difficult for DOE to meet all the priorities of its mission programs and the requirements of the department staff offices. For example, more management attention has sometimes been given to DOE contractors meeting nuclear weapons program goals than to operating safely and in an environmentally responsible manner. The widely publicized security problems at the Los Alamos National Laboratory in 1999 and 2000 are another example. DOE’s contract with Los Alamos contained few incentives for controlling classified material but many rewards for high-quality science work—yet this work was taking place in a top-secret laboratory, whose primary mission is designing nuclear weapons. As a result, although laboratory staff performed security tasks poorly, such lapses had limited impact on the lab contractor’s overall DOE rating and subsequent performance fee. In the future, the task of integrating diverse missions will likely be complicated by the need to place additional emphasis on DOE programs that play a role in ensuring homeland security. Such programs include critical infrastructure protection; nonproliferation programs, which aid in keeping nuclear material and weapons knowledge out of the hands of terrorists; R&D; and emergency preparedness. Over the last decade or so, DOE has undertaken major departmental shake-ups every two or three years. None has stemmed the recurring fundamental problems, and all have been thwarted by institutional intransigence. The most intractable organizational problems have involved the nuclear weapons complex. Years of tinkering with reporting relationships between the offices that have a role in national nuclear security and the laboratories where most weapons-related work is performed have not yielded many positive results.
For example, the Special Investigative Panel of the President’s Foreign Intelligence Advisory Board noted in its 1999 report that “convoluted, confusing, and often contradictory reporting channels have made the relationships between DOE headquarters and the laboratories, in particular, tense, internecine, and chaotic.” In addition, the panel found that much of the confusion centered on the role and power of the field offices. As the panel reported, “senior DOE officials often described these offices as redundant operations that function as shadow headquarters, often using their political clout and large payrolls to push their own agendas and budget priorities in the Congress.” To address long-standing security problems across the nuclear weapons complex, the panel concluded that because “DOE was incapable of reforming itself—bureaucratically and institutionally—in a lasting way,” an autonomous structure should be established for the national nuclear security business line, free of all other obligations imposed by DOE management. Specifically, the panel recommended creation of a new agency that would be far more mission-focused and bureaucratically streamlined. Instead, the semiautonomous NNSA was established within the department. DOE and NNSA officials are now attempting to develop and implement an organizational plan that can operate effectively within DOE’s overall field and headquarters structure. Historically, DOE’s efforts to reorganize assumed that current missions would be retained under any new structure. However, as DOE’s Laboratory Operations Board concluded in December 2000, the creation of NNSA would present organizational and management challenges, especially in maintaining a national laboratory system that can meet the department’s current mission requirements. Making changes in the current environment is further complicated by the need to consider the effect of DOE’s potentially expanded role in homeland security matters on overall departmental missions. DOE’s lack of a strong culture of accountability is the third basic impediment to improved performance. A number of factors have weakened accountability in the department. DOE’s organizational structure, which has blurred lines of authority, has made it difficult to hold staff and contractors accountable for poor performance. In addition, DOE has not taken action to improve the accountability of the organization in other areas that were identified in the mid-1990s. These pertain to contracting practices, health and safety regulation, and human capital management. The reluctance of past Secretaries to open all major DOE site and facility contracts to competitive bidding has diluted accountability by weakening the department’s position with its contractors. Only once has DOE fired a contractor for performance problems (at Brookhaven National Laboratory in May 1997), and rarely has it taken aggressive action to hold contractors accountable, even in the face of major project failures. DOE’s shifting policies on external regulation also reflect DOE leadership’s ambivalence toward accountability. Despite the position of former Secretary O’Leary—and her internal managers and consultants—that external regulation would give DOE credibility and make its facilities safer, subsequent leaders reversed course. At first, Secretary Federico Peña, O’Leary’s successor, slowed the process by ordering a pilot program of external regulation concepts. His cautious approach was meant to test how regulators might treat DOE, and at what cost.
His successor, Secretary Bill Richardson, concluded that external regulation was not worth pursuing because the costs would likely outweigh the benefits. However, this position conflicted with DOE’s own pilot program results and was inconsistent with conclusions reached by the Nuclear Regulatory Commission and the Occupational Safety and Health Administration—DOE’s likely regulators. Finally, DOE’s leadership has not devoted enough attention to recruiting and training a qualified technical workforce, even though these needs have been known for over a decade. Without such staff, the department lacks the expertise to direct and oversee contractors working on highly technical matters and hold them accountable for poor performance. Past DOE leadership has not succeeded in transforming the department into an effective agency, as shown by the persistence of management weaknesses that have led to the performance problems documented in this report. Historically, DOE has made piecemeal changes in response to problems or criticisms without assessing the root causes of its management weaknesses: DOE’s diverse missions, dysfunctional organizational structure, and weak culture of accountability. While DOE should take immediate steps to strengthen accountability, addressing the impediments to improved performance stemming from its diverse missions and dysfunctional organizational structure will require consultation with the Congress and other federal agencies. Since 1995, legislation has been introduced each year to eliminate DOE and transfer its missions to other agencies, or to terminate some of its R&D programs and laboratories. The establishment of NNSA might suggest opportunities to reconfigure other business lines, as some have suggested for the Office of Science. While the program activities of the department are important, that does not mean that all can be best managed under one agency or that each is inherently governmental. DOE must also have an organizational structure that effectively meets the needs of the department’s missions. However, given the current diversity of these missions, the semiautonomous status of NNSA, and shifting mission emphases, such as protecting energy infrastructure, establishing an optimum structure embracing all of DOE’s missions may simply not be possible. New leadership, ongoing organizational changes, and the need to consider how DOE’s responsibilities contribute to homeland security missions make this an opportune time to address the root causes of performance problems in DOE. To address its diverse mission and organizational issues, we recommend that the Secretary of Energy, in consultation with the Office of Management and Budget and other federal agencies that might gain or lose missions if DOE were restructured, develop a strategy for determining whether some missions would be managed better if located elsewhere, combined with other agencies, or privatized. Once this is accomplished, the Secretary should report his findings and a proposal to realign the various missions to the Congress. Pending the results of a comprehensive review of DOE’s missions, the Secretary of Energy should take immediate steps to improve the department’s accountability.
Such steps should include, for example, ensuring that all contract-reform initiatives already under way are completed, holding staff and contractors strictly accountable for performance, ending self-regulation of worker and nuclear safety in its facilities, and developing a more technically competent workforce. In commenting on a draft of our report, DOE said that the Secretary "recognizes and accepts" many of our points and has already "instituted a path forward for achieving his vision of excellence." DOE also noted that its management challenges are "enormous" and efforts to resolve them "will take time." An important effort under way, according to DOE, is its "strategic mission review," for which a report is due in January 2002. According to DOE, the purpose of this review is to focus the department on activities that best support its "overarching national security mission." DOE also listed several other steps that it said will help clarify roles and responsibilities, streamline its organizational structure, and instill stronger accountability among federal and contractor staff. Further, DOE said it has launched initiatives to "determine why previously identified problems have not been addressed." Finally, the department said that the sum of its ongoing initiatives should enable it to "achieve the spirit" of our recommendations to improve mission, structure, and accountability. DOE's many initiatives, if fully implemented, would address several management challenges that have long plagued the department. However, while it is too early to assess the effectiveness of these initiatives, we are concerned that they may not adequately address the root causes of DOE’s recurring performance problems, particularly those related to the department's diverse missions. For example, while we applaud the Secretary's efforts to provide a strategic focus to guide all program activities, it is unclear how a “national security” mission can subsume each of DOE’s highly diverse programs in science, environmental quality, and energy resources. Developing measurable national security objectives for environmental management, DOE’s largest budget category, will be particularly challenging. Also, it appears that DOE's "strategic mission review" assumes that each of its many missions is still best managed by the department. As we noted in our report, many of DOE's structure and accountability problems stem from the nearly impossible task of managing diverse (and sometimes conflicting) cultures within a common field structure. The role and responsibility problems that result from this condition will likely persist, absent a comprehensive evaluation of how and where best to manage each mission. The creation of NNSA was an attempt to resolve some of these issues internally, but the effectiveness of its management structure and associated processes is still highly uncertain. In particular, DOE has still not clearly defined roles and responsibilities for NNSA’s headquarters and field units or relationships with the rest of the department. DOE's task of developing an integrated department is made more difficult by an expanding mission emphasis on safeguarding energy infrastructure and enhancing homeland defense against terrorist threats.
We believe that with these new mission emphases and the persistent questions about how NNSA will operate relative to other DOE programs, it is more important than ever for a strategic mission review to focus on determining whether some missions would be managed better if located elsewhere, combined with other agencies, or privatized. As we explained in our report, a comprehensive mission assessment would require the Secretary to consult with the Office of Management and Budget and other federal agencies that might gain or lose missions if DOE were restructured. Many of the organizational changes cited by DOE are positive steps, such as clarifying the roles of the deputy secretary and under secretaries, and creating a Field Management Council to facilitate cooperation among the department’s diverse programs. However, past experience has shown that such process changes have merely tinkered with a flawed structure. Without a serious effort to consider each mission for its proper placement in or out of DOE, the structural problems that have clouded roles and responsibilities will likely persist. Therefore, we reaffirm our recommendation that DOE develop a strategy for realigning its missions, followed by a proposal to the Congress. Finally, while DOE cited numerous initiatives to strengthen accountability, it is too early to judge whether these and other efforts adequately address our recommendation in this area. In particular, we note that none of the initiatives cited by DOE would end self-regulation of nuclear and worker safety in its facilities. Moreover, DOE leadership has not been able to fully implement and sustain past initiatives aimed at improving accountability among federal and contractor staff. Appendix III includes the full text of DOE's comments and our response. We conducted our review from November 2000 through September 2001 in accordance with generally accepted government auditing standards. Appendix I provides details about the scope and methodology of our review. As arranged with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 15 days from the date of this letter. We will then send copies to the Secretary of Energy; the Director, Office of Management and Budget; appropriate congressional committees; and other interested parties. We will also make copies available to others on request. We conducted our analysis primarily through an assessment of more than 200 external and internal reviews of the Department of Energy (DOE) since August 1995. We selected this date as a baseline because it coincides with our first call to assess DOE’s structure and missions, based on a series of prior reports on the department. In addition, we relied on information from interviews and internal documents obtained previously from DOE headquarters in Washington, D.C., and operations offices in the field that are affiliated with the three largest program offices. These field offices included the Oakland Operations Office in California, aligned with the National Nuclear Security Administration (NNSA); the Chicago Operations Office in Illinois, aligned with the Office of Science; and the Savannah River Operations Office in South Carolina, aligned with the Office of Environmental Management. To describe actions taken by DOE to improve its performance by the mid-1990s, we reexamined our 1995 report on a framework for restructuring DOE and its missions.
We also reviewed documents pertaining to the reforms initiated by DOE at the time of our report, including the results of several noteworthy task forces that were established by the department. We relied primarily on the department’s comments on our August 1995 report to represent DOE’s position on the significance of its initiated reforms. To assess DOE progress since the mid-1990s in addressing management weaknesses and improving performance, we searched our database for reviews of DOE that we published between August 1995 and May 2001. Of the more than 225 reports identified, we selected 121 that addressed DOE corporate management functions, including strategic planning; information technologies; retaining, recruiting and training staff; security; environment, safety and health practices; contracting; program and project management; and national laboratory reform. We prepared summaries of the observations and recommendations contained in each of these reports. We chose not to include reports that addressed either independent agencies within the department or issues that do not consume many DOE resources. Specifically, we excluded reports on the Nuclear Regulatory Commission, the Federal Energy Regulatory Commission, the Power Marketing Administration, the Tennessee Valley Authority, and issues related to global climate change. With the exception of our major management challenges reports on DOE, the reports that we included were limited in scope and addressed only specific issues under review. The reports, therefore, do not cover all of the program and project activities of the department. For example, there was limited review of the department’s energy resources business line. To improve our coverage of the department, we searched other sources of reports to identify 87 additional documents that addressed the department’s performance since 1995. The Congressional Research Service, DOE’s Inspector General, the National Research Council, the National Academy of Public Administration, several DOE task forces and commissions, as well as the department, were among those organizations that prepared these reports. Appendix II lists the reports and other documents that we reviewed. To identify any underlying impediments to more effective management and improved performance at DOE, we reviewed our collection of reports to determine the possible causes behind the recurring management weaknesses. While there was no single source among the reports reviewed that explicitly observed all three of our root causes, there were many documents that mentioned one or two of them as contributing to a departmental culture that resists fundamental change. We assessed the strength and pervasiveness of these root causes, as well as the actions of past DOE leadership, to draw our conclusions and recommendations. We conducted our review from November 2000 through September 2001 in accordance with generally accepted government auditing standards. Department of Energy: Views on the Progress of the National Nuclear Security Administration in Implementing Title 32 (GAO-01-602T, Apr. 1, 2001). Information Security: Safeguarding of Data in Excessed Department of Energy Computers (GAO-01-469, Mar. 29, 2001). Nuclear Cleanup: Progress Made at Rocky Flats, but Closure by 2006 Is Unlikely, and Costs May Increase (GAO-01-284, Feb. 28, 2001). High Risk Series: An Update (GAO-01-263, Jan. 2001). Major Management Challenges and Program Risks: Department of Energy (GAO-01-246, Jan. 2001). 
Nuclear Weapons: Improved Management Needed to Implement Stockpile Stewardship Program Effectively (GAO-01-48, Dec. 14, 2000). Financial Management: Billions in Improper Payments Continue to Require Attention (GAO-01-44, Oct. 27, 2000). Reinventing Government: Status of NPR Recommendations at 10 Federal Agencies (GAO/GGD-00-145, Sept. 21, 2000). Government Performance and Results Act: Information on Science Issues in the Department of Energy’s Accountability Report for Fiscal Year 1999 and Performance Plans for Fiscal Years 2000 and 2001 (GAO/RCED-00-268R, Aug. 25, 2000). National Ignition Facility: Management and Oversight Failures Caused Major Cost Overruns and Schedule Delays (GAO/RCED-00-271, Aug. 8, 2000). Department of Energy: Uncertainties and Management Problems Have Hindered Cleanup at Two Nuclear Waste Sites (GAO/T-RCED-00-248, July 12, 2000). Nuclear Security: Information on DOE’s Requirements for Protecting and Controlling Classified Documents (GAO/T-RCED-00-247, July 11, 2000). Observations on the Department of Energy’s Fiscal Year 1999 Accountability Report and Fiscal Year 2000/2001 Performance Plan (GAO/RCED-00-209R, June 30, 2000). Nuclear Waste Cleanup: DOE’s Cleanup Plan for the Paducah, Kentucky, Site Faces Uncertainties and Excludes Costly Activities (GAO/T-RCED-00-225, June 27, 2000). Department of Energy: National Security Controls Over Contractors Traveling to Foreign Countries Need Strengthening (GAO/RCED-00-140, June 26, 2000). Nuclear Waste: Observations on DOE’s Privatization Initiative for Complex Cleanup Projects (GAO/T-RCED-00-215, June 22, 2000). Information Security: Vulnerabilities in DOE’s Systems for Unclassified Civilian Research (GAO/AIMD-00-140, June 9, 2000). Nuclear Waste: DOE’s Advanced Mixed Waste Treatment Project: Uncertainties May Affect Performance, Schedule, and Price (GAO/RCED-00-106, Apr. 28, 2000). Nuclear Waste Cleanup: DOE’s Paducah Plan Faces Uncertainties and Excludes Costly Cleanup Activities (GAO/RCED-00-96, Apr. 28, 2000). Federal Research: DOE Is Providing Independent Review of the Scientific Merit of Its Research (GAO/RCED-00-109, Apr. 25, 2000). Low-Level Radioactive Wastes: Department of Energy Has Opportunities to Reduce Disposal Costs (GAO/RCED-00-64, Apr. 12, 2000). Department of Energy: Views on Proposed Civil Penalties, Security Oversight, and External Safety Regulation Legislation (GAO/T-RCED-00-135, Mar. 22, 2000). Nuclear Security: Security Issues at DOE and Its Newly Created National Nuclear Security Administration (GAO/T-RCED-00-123, Mar. 14, 2000). Nuclear Nonproliferation: Limited Progress in Improving Nuclear Material Security in Russia and the Newly Independent States (GAO/RCED/NSIAD-00-82, Mar. 6, 2000). Department of Energy: Views on DOE’s Plan to Establish the National Nuclear Security Administration (GAO/T-RCED-00-113, Mar. 2, 2000). Nuclear Security: Improvements Needed in DOE’s Safeguards and Security Oversight (GAO/RCED-00-62, Feb. 24, 2000). Occupational Safety and Health: Federal Agencies Identified as Promoting Workplace Safety and Health (GAO/HEHS-00-45R, Jan. 31, 2000). Nuclear Weapons: Challenges Remain for Successful Implementation of DOE’s Tritium Supply Decision (GAO/RCED-00-24, Jan. 2000). Nuclear Waste: DOE’s Hanford Spent Nuclear Fuel Storage Project—Cost, Schedule, and Management Issues (GAO/RCED-99-267, Sept. 20, 1999). Department of Energy: Uncertain Future for External Regulation of Worker and Nuclear Facility Safety (GAO/T-RCED-99-269, July 22, 1999).
Observations on the Department of Energy's Fiscal Year 2000 Performance Plan (GAO/RCED-99-218R, July 20, 1999). Department of Energy: Problems in the Management and Use of Supercomputers (GAO/T-RCED-99-257, July 14, 1999). Department of Energy: Need to Address Longstanding Management Weaknesses (GAO/T-RCED-99-255, July 13, 1999). Nuclear Safety: Department of Energy Should Strengthen Its Enforcement Program (GAO/T-RCED-99-228, June 29, 1999). Nuclear Weapons: DOE Needs to Improve Oversight of the $5 Billion Strategic Computing Initiative (GAO/RCED-99-195, June 28, 1999). Department of Energy: DOE's Nuclear Safety Enforcement Program Should Be Strengthened (GAO/RCED-99-146, June 10, 1999). Department of Energy: Cost Estimates for the Hanford Tank Waste Remediation Project (GAO/RCED-99-188R, May 19, 1999). National Laboratories: DOE Needs to Assess the Impact of Using Performance-Based Contracts (GAO/RCED-99-141, May 7, 1999). Nuclear Waste: DOE's Accelerated Cleanup Strategy Has Benefits but Faces Uncertainties (GAO/RCED-99-129, Apr. 30, 1999). Department of Energy: Accelerated Closure of Rocky Flats: Status and Obstacles (GAO/RCED-99-100, Apr. 30, 1999). Nuclear Waste: Process to Remove Radioactive Waste From Savannah River Tanks Fails to Work (GAO/RCED-99-69, Apr. 30, 1999). Department of Energy: Key Factors Underlying Security Problems at DOE Facilities (GAO/T-RCED-99-159, Apr. 20, 1999). DOE Management: Opportunities for Saving Millions in Contractor Travel Costs (GAO/RCED-99-107, Apr. 1, 1999). Department of Energy: Usefulness of Performance Plan Could Be Improved (GAO/T-RCED-99-134, Mar. 24, 1999). Department of Energy: Challenges Exist in Managing the Spallation Neutron Source Project (GAO/T-RCED-99-103, Mar. 3, 1999). Nuclear Nonproliferation: Concerns With DOE's Efforts to Reduce the Risks Posed by Russia's Unemployed Weapons Scientists (GAO/RCED-99-54, Feb. 19, 1999). Department of Energy: Actions Necessary to Improve DOE's Training Program (GAO/RCED-99-56, Feb. 12, 1999). Major Management Challenges and Program Risks: Department of Energy (GAO/OGC-99-6, Jan. 1999). Nuclear Weapons: Key Nuclear Weapons Component Issues Are Unresolved (GAO/RCED-99-1, Nov. 9, 1998). Department of Energy: Management of Excess Property (GAO/RCED-99-3, Nov. 4, 1998). Department of Energy: DOE Needs to Improve Controls Over Foreign Visitors to Its Weapons Laboratories (GAO/T-RCED-99-28, Oct. 14, 1998). Nuclear Waste: Department of Energy's Hanford Tank Waste Project—Schedule, Cost, and Management Issues (GAO/RCED-99-13, Oct. 8, 1998). Nuclear Waste: Schedule, Cost, and Management Issues at DOE's Hanford Tank Waste Project (GAO/T-RCED-99-21, Oct. 8, 1998). Department of Energy: Problems in DOE's Foreign Visitor Program Persist (GAO/T-RCED-99-19, Oct. 6, 1998). Nuclear Waste: Further Actions Needed to Increase the Use of Innovative Cleanup Technologies (GAO/RCED-98-249, Sept. 25, 1998). Department of Energy: DOE Lacks an Effective Strategy for Addressing Recommendations From Past Laboratory Advisory Groups (GAO/T-RCED-98-274, Sept. 23, 1998). Department of Energy: Uncertain Progress in Implementing National Laboratory Reforms (GAO/RCED-98-197, Sept. 10, 1998). Department of Energy: Lessons Learned Incorporated Into Performance-Based Incentive Contracts (GAO/RCED-98-223, July 29, 1998). Information Technology: Department of Energy Does Not Effectively Manage Its Supercomputers (GAO/RCED-98-208, July 17, 1998). 
Financial Management: Fostering the Effective Implementation of Legislative Goals (GAO/T-AIMD-98-215, June 18, 1998). DOE Management: Functional Support Costs at DOE Facilities (GAO/RCED-98-193R, June 12, 1998). DOE Fiscal Year 1999 Budget Request for Energy Efficiency and Renewable Energy and Financial Management Issues (GAO/RCED-98-186R, June 10, 1998). Department of Energy: Alternative Financing and Contracting Strategies for Cleanup Projects (GAO/RCED-98-169, May 29, 1998). Results Act: Observations on DOE’s Annual Performance Plan for Fiscal Year 1999 (GAO/RCED-98-194R, May 28, 1998). Department of Energy: Clear Strategy on External Regulation Needed for Worker and Nuclear Facility Safety (GAO/T-RCED-98-205, May 21, 1998). Department of Energy: Clear Strategy on External Regulation Needed for Worker and Nuclear Facility Safety (GAO/RCED-98-163, May 21, 1998). Nuclear Waste: Management Problems at the Department of Energy’s Hanford Spent Fuel Storage Project (GAO/T-RCED-98-119, May 12, 1998). Department of Energy: DOE Contractor Employee Training (GAO/RCED-98-155R, May 8, 1998). Department of Energy: Problems and Progress in Managing Plutonium (GAO/RCED-98-68, Apr. 17, 1998). Results Act: DOE Can Improve Linkages Among Plans and Between Resources and Performance (GAO/RCED-98-94, Apr. 14, 1998). Nuclear Weapons: Design Reviews of DOE’s Tritium Extraction Facility (GAO/RCED-98-75, Mar. 31, 1998). Nuclear Waste: Understanding of Waste Migration at Hanford Is Inadequate for Key Decisions (GAO/RCED-98-80, Mar. 13, 1998). Best Practices: Elements Critical to Successfully Reducing Unneeded RDT&E Infrastructure (GAO/NSIAD/RCED-98-23, Jan. 8, 1998). Department of Energy: Subcontracting Practices (GAO/RCED-98-30R, Nov. 24, 1997). Department of Energy: Information on the Tritium Leak and Contractor Dismissal at the Brookhaven National Laboratory (GAO/RCED-98-26, Nov. 4, 1997). Department of Energy: Clearer Missions and Better Management Are Needed at the National Laboratories (GAO/T-RCED-98-25, Oct. 9, 1997). Department of Energy: DOE Needs to Improve Controls Over Foreign Visitors to Weapons Laboratories (GAO/RCED-97-229, Sept. 25, 1997). Results Act: Observations on the Department of Energy’s August 15, 1997, Draft Strategic Plan (GAO/RCED-97-248R, Sept. 2, 1997). Results Act: Observations on Federal Science Agencies (GAO/T-RCED-97-220, July 30, 1997). Nuclear Waste: Department of Energy’s Pit 9 Cleanup Project Is Experiencing Problems (GAO/T-RCED-97-221, July 28, 1997). Nuclear Waste: Department of Energy’s Project to Clean Up Pit 9 at Idaho Falls Is Experiencing Problems (GAO/RCED-97-180, July 28, 1997). Results Act: Comments on Selected Aspects of the Draft Strategic Plans of the Departments of Energy and the Interior (GAO/T-RCED-97-213, July 17, 1997). Results Act: Observations on the Department of Energy’s Draft Strategic Plan (GAO/RCED-97-199R, July 11, 1997). Department of Energy: Status of DOE’s Efforts to Improve Training (GAO/RCED-97-178R, June 27, 1997). High-Risk Program: Information on Selected High-Risk Areas (GAO/HR-97-30, May 1997). Department of Energy: Opportunity for Enhanced Oversight of Major System Acquisitions (GAO/RCED-97-146R, Apr. 30, 1997). Department of Energy: Information on the Distribution of Funds for Counterintelligence Programs and the Resulting Expansion of These Programs (GAO/RCED-97-128R, Apr. 25, 1997). Department of Energy: Funding and Workforce Reduced, but Spending Remains Stable (GAO/RCED-97-96, Apr. 24, 1997). 
Department of Energy: Plutonium Needs, Costs, and Management Programs (GAO/RCED-97-98, Apr. 17, 1997). Department of Energy: Improving Management of Major System Acquisitions (GAO/T-RCED-97-92, Mar. 6, 1997). Department of Energy: Management and Oversight of Cleanup Activities at Fernald (GAO/RCED-97-63, Mar. 14, 1997). High-Risk Series: Department of Energy Contract Management (GAO/HR-97-13, Feb. 1997). Nuclear Waste: DOE’s Estimates of Potential Savings From Privatizing Cleanup Projects (GAO/RCED-97-49R, Jan. 31, 1997). Nuclear Waste: Impediments to Completing the Yucca Mountain Repository Project (GAO/RCED-97-30, Jan. 17, 1997). Department of Energy: Contract Reform Is Progressing, but Full Implementation Will Take Years (GAO/RCED-97-18, Dec. 10, 1996). Department of Energy: Opportunity to Improve Management of Major System Acquisitions (GAO/RCED-97-17, Nov. 26, 1996). DOE Security: Information on Foreign Visitors to the Weapons Laboratories (GAO/T-RCED-96-260, Sept. 26, 1996). Department of Energy: Observations on the Future of the Department (GAO/T-RCED-96-224, Sept. 4, 1996). Hanford Waste Privatization (GAO/RCED-96-213R, Aug. 2, 1996). Nuclear Weapons: Improvements Needed to DOE’s Nuclear Weapons Stockpile Surveillance Program (GAO/RCED-96-216, July 31, 1996). Information Management: Energy Lacks Data to Support Its Information System Streamlining Effort (GAO/AIMD-96-70, July 23, 1996). Energy Management: Technology Development Program Taking Action to Address Problems (GAO/RCED-96-184, July 9, 1996). DOE’s Cleanup Cost Savings (GAO/RCED-96-163R, July 1, 1996). DOE’s Laboratory Facilities (GAO/RCED-96-183R, June 26, 1996). Energy Research: Opportunities Exist to Recover Federal Investment in Technology Development Projects (GAO/RCED-96-141, June 26, 1996). Department of Energy: Progress Made Under Its Strategic Alignment and Downsizing Initiative (GAO/T-RCED-96-197, June 12, 1996). Federal Facilities: Consistent Relative Risk Evaluations Needed for Prioritizing Cleanups (GAO/RCED-96-150, June 7, 1996). Managing DOE: The Department’s Efforts to Control Litigation Costs (GAO/T-RCED-96-170, May 14, 1996). Energy Downsizing: While DOE Is Achieving Budget Cuts, It Is Too Soon to Gauge Effects (GAO/RCED-96-154, May 13, 1996). Success Stories Response (GAO/OCG-96-3R, May 13, 1996). DOE Cleanup: Status and Future Costs of Uranium Mill Tailings Program (GAO/T-RCED-96-167, May 1, 1996). DOE’s Success Stories Report (GAO/RCED-96-120R, Apr. 15, 1996). Environmental Protection: Issues Facing the Energy and Defense Environmental Management Programs (GAO/T-RCED/NSIAD-96-127, Mar. 21, 1996). Nuclear Weapons: Status of DOE’s Nuclear Stockpile Surveillance Program (GAO/T-RCED-96-100, Mar. 13, 1996). Federal R&D Laboratories (GAO/RCED/NSIAD-96-78R, Feb. 29, 1996). Nuclear Nonproliferation: Concerns With the U.S. International Nuclear Materials Tracking System (GAO/T-RCED/AIMD-96-91, Feb. 28, 1996). Uranium Mill Tailings: Status and Future Costs of Cleanup (GAO/T-RCED-96-85, Feb. 28, 1996). Energy’s Financial Resources and Workforce (GAO/RCED-96-69R, Feb. 28, 1996). Nuclear Waste: Management and Technical Problems Continue to Delay Characterizing Hanford’s Tank Waste (GAO/RCED-96-56, Jan. 26, 1996). Uranium Mill Tailings: Cleanup Continues, but Future Costs Are Uncertain (GAO/RCED-96-37, Dec. 15, 1995). Department of Energy: A Framework for Restructuring DOE and Its Missions (GAO/RCED-95-197, Aug. 21, 1995). 
Report to Congress on the Plan for Organizing the National Nuclear Security Administration (Department of Energy, National Nuclear Security Administration, May 3, 2001). Special Report: Performance Measures at the Department of Energy (Department of Energy, Office of Inspector General, DOE/IG-0504, May 2001). Prepared Testimony of John A. Gordon, Under Secretary of Energy and Administrator for Nuclear Security, National Nuclear Security Administration, U.S. Department of Energy, Before the Senate Appropriations Committee, Energy & Water Subcommittee (Apr. 26, 2001). Statement of John A. Gordon, Under Secretary of Energy and Administrator for Nuclear Security, National Nuclear Security Administration, U.S. Department of Energy, Before the Special Oversight Panel on Department of Energy Reorganization, Committee on Armed Services, U.S. House of Representatives (Apr. 4, 2001). Audit Report: Bechtel Jacobs Company LLC's Management and Integration Contract at Oak Ridge (Department of Energy, Office of Inspector General, DOE/IG-0498, Mar. 21, 2001). Science and Technology Issues Facing the 107th Congress: First Session (Congressional Research Service-RL30869, Mar. 1, 2001). Department of Energy: Performance and Accountability Report Fiscal Year 2000 (Department of Energy/CR-0071, Feb. 16, 2001). Federal Managers' Financial Integrity Act (Department of Energy, Memorandum for the Secretary of Energy from Gregory H. Friedman, Inspector General, CR-L-01-06, Feb. 8, 2001). FY 2000 Report to Congress of the Panel to Assess the Reliability, Safety, and Security of the United States Nuclear Stockpile (Feb. 1, 2001). Eleventh Annual Report to Congress (Defense Nuclear Facilities Safety Board, Feb. 2001). H.R. 376—To Abolish the Department of Energy (107th Congress, Jan. 31, 2001). Interim Letter Report for the Improved Project Management in the Department of Energy (The National Academies, Jan. 17, 2001). The Department of Energy's Tritium Production Program (Congressional Research Service-RL30425, Jan. 12, 2001). Nuclear Energy Policy (Congressional Research Service-IB88090, Jan. 12, 2001). Civilian Nuclear Waste Disposal (Congressional Research Service-IB92059, Jan. 10, 2001). The National Ignition Facility: Management, Technical, and Other Issues (Congressional Research Service-RL30540, Jan. 4, 2001). Department of Energy Research and Development Budget for FY 2001: Description and Analysis (Congressional Research Service-RL30445, Jan. 3, 2001). Annual Performance Plan for FY 2001 (Department of Energy/CR-0068-9). China: Suspected Acquisition of U.S. Nuclear Weapon Secrets (Congressional Research Service-RL30143, Dec. 20, 2000). DOE Science for the Future: A Discussion Paper (Academic Panel, Dec. 14, 2000). Performance-Based Management at the Department of Energy (External Members of the Laboratory Operations Board, Dec. 7, 2000). Contributions and Value of the Laboratory Operations Board (Department of Energy, Memorandum from Ernest Moniz and John McTague to Bill Richardson, Secretary of Energy, Dec. 7, 2000). Special Report: Management Challenges at the Department of Energy (Department of Energy, Office of Inspector General, DOE/IG-0491, Nov. 28, 2000). United States Department of Energy: Fact Book FY 2000 (Department of Energy, Office of Management and Administration, Office of Management and Operations Support, Nov. 2000). 
Establishing the National Nuclear Security Administration: A Year of Obstacles and Opportunities (Special Oversight Panel on Department of Energy Reorganization, Committee on Armed Services, U.S. House of Representatives, Oct. 13, 2000). DOE's Civilian Information Technology Program (Congressional Research Service-RS20626, Oct. 5, 2000). Field Restructuring (Department of Energy, Memorandum for Heads of Departmental Elements from T. J. Glauthier, Sept. 26, 2000). Restructuring DOE and Its Laboratories: Issues in the 106th Congress (Congressional Research Service-IB10036, Sept. 13, 2000). The Department of Energy's Spallation Neutron Source Project: Description and Issues (Congressional Research Service-RL30385, Sept. 12, 2000). Strategic Plan: Powering the 21st Century—Strength Through Science (Department of Energy/CR-0070, Sept. 2000). Audit Report: Security Overtime at the Oak Ridge Operations Office (Department of Energy, Office of Inspector General, ER-B-00-02, June 21, 2000). Roles and Responsibilities Guiding Principles (Department of Energy, Memorandum from T. J. Glauthier to Under Secretary, Energy, Science, and Environment and Acting Administrator for Nuclear Security, June 2, 2000). Audit Report: Central Shops at Brookhaven National Laboratory (Department of Energy, Office of Inspector General, ER-B-00-01, May 11, 2000). Audit Report: Performance Incentives at the Idaho National Engineering and Environmental Laboratory (Department of Energy, Office of Inspector General, WR-B-00-05, Apr. 3, 2000). Statement of Dan W. Reicher, Assistant Secretary for Energy Efficiency and Renewable Energy, U.S. Department of Energy, Before the Subcommittee on Interior and Related Agencies, Committee on Appropriations, U.S. House of Representatives, Oversight Hearing on Energy Conservation Financial Management Procurement (Mar. 30, 2000). Charitable Giving Requirements in Department of Energy Contracts (Department of Energy, Memorandum From the Inspector General to the Deputy Secretary, HQ-L-00-01, Mar. 14, 2000). A Review of Management in the Office of Energy Efficiency and Renewable Energy (National Academy of Public Administration, Mar. 2000). Audit Report: The Department's Management and Operating Contractor Make-or-Buy Program (Department of Energy, Office of Inspector General, DOE/IG-0460, Feb. 17, 2000). Tenth Annual Report to Congress (Defense Nuclear Facilities Safety Board, Feb. 2000). Strength Through Science—U.S. Department of Energy FY 2001 Budget Request to Congress—Budget Highlights (Department of Energy, Office of Chief Financial Officer, Feb. 2000). Congress and the Fusion Energy Sciences Program: A Historical Analysis (Congressional Research Service-RL30417, Jan. 31, 2000). Audit Report: Follow-up Audit of Program Administration by the Office of Science (Department of Energy, Office of Inspector General, DOE/IG-0457, Jan. 24, 2000). Audit Report: The Management of Tank Waste Remediation at the Hanford Site (Department of Energy, Office of Inspector General, DOE/IG-0456, Jan. 21, 2000). Audit Report: Outsourcing Opportunities at the Los Alamos National Laboratory (Department of Energy, Office of Inspector General, WR-B-00-03, Jan. 18, 2000). Research and Development Budget of the Department of Energy for FY2000: Description and Analysis (Congressional Research Service-RL30054, Dec. 16, 1999). Inspection Report: Inspection of Alleged Improprieties Regarding Issuance of a Contract (DOE/IG-INS-O-00-02, Dec. 16, 1999). 
Contractor Make or Buy Plan Implementation (Department of Energy, Memorandum from Richard Hopf, Director, Office of Procurement and Assistance Management, to Heads of Contracting Activities, Dec. 6, 1999). Stockpile Stewardship Program: 30-Day Review (Department of Energy, Nov. 23, 1999). FY 1999 Report of the Panel to Assess the Reliability, Safety, and Security of the United States Nuclear Stockpile (Nov. 8, 1999). Department of Energy: Programs and Reorganization Proposals (Congressional Research Service-RL30307, Sept. 17, 1999). DOE Security: Protecting Nuclear Material and Information (Congressional Research Service-RS20243, July 23, 1999). Glauthier Announces DOE Project Management Reforms (Department of Energy Press Release, June 25, 1999). Technology Transfer to China: An Overview of the Cox Committee Investigation Regarding Satellites, Computers, and DOE Laboratory Management (Congressional Research Service-RL30231, June 11, 1999). Science at its Best, Security at its Worst: A Report on Security Problems at the U.S. Department of Energy (A Special Investigative Panel, President’s Foreign Intelligence Advisory Board, June 1999). Changes to the Departmental Management Structure (Department of Energy, Memorandum from the Secretary of Energy to Heads of Departmental Elements, Apr. 21, 1999). Commission on Maintaining United States Nuclear Weapons Expertise: Report to the Congress and Secretary of Energy (Mar. 1, 1999). Ninth Annual Report to Congress (Defense Nuclear Facilities Safety Board, Feb. 1999). Audit Report: The U.S. Department of Energy’s Implementation of the Government Performance and Results Act (Department of Energy, Office of Inspector General, DOE/IG-0439, Feb. 4, 1999). U.S. National Security and Military/Commercial Concerns with the People’s Republic of China (Select Committee, United States House of Representatives, Jan. 3, 1999). Department of Energy: Accountability Report Fiscal Year 1999 (Department of Energy/CR-0069, 1999). Improving Project Management in the Department of Energy (National Research Council, 1999). U.S. Department of Energy Strategic Alignment Initiative, Fiscal Year 1998 Status Report (Department of Energy, 1999). Audit Report: The U.S. Department of Energy’s Efforts to Increase the Financial Responsibility of Its Major For-Profit Operating Contractors (Department of Energy, Office of Inspector General, DOE/IG-0432, Nov. 20, 1998). Audit Report: Project Hanford Management Contract Costs and Performance (Department of Energy, Office of Inspector General, DOE/IG-0430, Nov. 5, 1998). Audit Report: The U.S. Department of Energy’s Prime Contractor Fees on Subcontractor Costs (Department of Energy, Office of Inspector General, DOE/IG-0427, Sept. 11, 1998). Unlocking Our Future: Toward a New National Science Policy (A Report to Congress by the House Committee on Science, Sept. 24, 1998). Audit Report: The Cost Reduction Incentive Program at the Savannah River Site (Department of Energy, Office of Inspector General, ER-B-98-08, May 29, 1998). Inspection Report: The Fiscal Year 1996 Performance Based Incentive Program at the Savannah River Operations Office (Department of Energy, Office of Inspector General, May 1998). Assessing the Need for Independent Project Reviews in the DOE (National Research Council, 1998). Audit of Support Services Subcontracts at Argonne National Laboratory (Department of Energy, Office of the Inspector General, DOE/IG-0416, Dec. 23, 1997). Departmental Reporting Relationships (Department of Energy, Memorandum from J. M. 
Wilcynski, Manager, Idaho Operations Office, to the Deputy Secretary and Under Secretary, Nov. 26, 1997). Audit Report: Audit of the Contractor Incentive Program at the Nevada Operations Office (Department of Energy, Office of Inspector General, DOE/IG-0412, Oct. 20, 1997). Restructuring DOE and Its Laboratories: Issues in the 105th Congress (Congressional Research Service-IB97012, Oct. 15, 1997). External Members of the Laboratory Operations Board Analysis of Headquarters and Field Structure Issues (Secretary of Energy Advisory Board, Oct. 2, 1997). DOE Laboratory Restructuring Legislation in the 104th Congress (Congressional Research Service-97-558SPR, May 13, 1997). The Organization and Management of the Nuclear Weapons Program: 120-Day Study (Institute for Defense Analyses, Feb. 27, 1997). Seventh Annual Report to Congress (Defense Nuclear Facilities Safety Board, Feb. 1997). Department of Energy Strategic Alignment Initiative Status Report—Fiscal Year 1996 (DOE, Dec. 1996). How to Close Down the Department of Energy (The Heritage Foundation, Nov. 9, 1995). Department of Energy Abolition? Implications for the Nuclear Weapons Program (Congressional Research Service-95-1020F, Sept. 29, 1995). Strategic Alignment: Tracking Our Progress (Department of Energy, Sept. 5, 1995). Energy R&D: Shaping Our Nation's Future in a Competitive World (Final Report of the Task Force on Strategic Energy Research and Development, June 1995). Alternative Futures for the DOE National Laboratories (Task Force on Alternative Futures for the National Laboratories, Secretary of Energy Advisory Board, Feb. 1995). The following are GAO's comments on the Department of Energy's letter dated November 30, 2001. 1. Our response is included in the body of the report. 2. In our report, we acknowledge and support DOE's efforts to implement performance-based contracting practices and to competitively award more of its contracts. As suggested, we have revised our report to note that the department has not been required to compete contracts to manage its Federally Funded Research and Development Centers. 3. As we state in our report, our concern is that some of DOE's largest contracts, notably those with the University of California to manage several national laboratories, have never been opened to competitive bidding. According to DOE, the decisions related to the most recent contract extension with this university were based on "national security considerations" and were not "contract management decisions…" The benefits of competing contracts are widely accepted and espoused by DOE in its own policies. Recent interest shown by another university in competing for the Sandia National Laboratory contract when it expires in 2003 suggests that there may be other capable competitors, and that national security considerations do not inhibit DOE from attracting new performers. 4. We agree that DOE sponsors many "unique" projects that contain significant research and development that can affect cost and schedule assumptions, and we have incorporated this comment in our report. Nevertheless, we concur with DOE that this circumstance should not be used as "an excuse for the poor performance in project management" that was cited in our report. 5. We do not concur with DOE that the department's strategic planning process has worked effectively to organize and integrate its diverse missions. 
As we said in our report, DOE told us that its strategic plan is a composite of plans that guides the program activities of the department's four "business lines," each of which establishes its own objectives and management systems. Acknowledging the unfocused nature of the department, the Secretary is just now taking steps to define an overarching departmental objective for all programs and to expand NNSA’s new Planning, Programming, Budgeting and Evaluation system department-wide. He is also creating a new office under the Chief Financial Officer that "will analyze and evaluate plans, programs and budgets in relation to the department's objectives…" The department said that it expects this office will serve as the "linchpin" for making improvements in strategic planning in the future. 6. We reported in 1998 that DOE's Strategic Laboratory Missions plan, which was published in 1996, was essentially a descriptive summary of current laboratory activities; it did not direct change. Nor did the plan tie DOE's or the laboratories' missions to the annual budget process. As we previously reported, when we asked laboratory officials about strategic planning, most discussed their own planning capabilities, and some laboratories provided us with their own self-generated strategic planning documents. None of the officials at the multiprogram laboratories we visited at the time mentioned DOE's Strategic Laboratory Missions plan as an essential document for their own strategic planning. 7. We noted in our report that DOE is attempting to clarify roles and responsibilities. We also noted that DOE's 1999 reorganization was similar to steps the department had taken previously without success. While we have not assessed the effectiveness of the new Field Management Council, we noted in our report that the establishment of the NNSA appears to have created, at least temporarily, additional confusion regarding roles, responsibilities, and reporting relationships within the department. 8. We noted in our report that the department has been taking steps to address its workforce problems since the early 1990s, and it continues to do so today. As we said, we are concerned by the lack of succession planning and progress by DOE in addressing known human capital deficiencies. We have revised our report, however, to reflect that DOE published, in September 2001, its "Five-Year Workforce Restructuring Plan." According to DOE, the plan responds to an OMB requirement of all federal agencies and presents a "corporate roadmap" for reducing manager and organizational layers, increasing spans of control, and redeploying staff. The plan describes a variety of ongoing and planned actions. Regarding DOE's discussion of the many underlying factors affecting its staffing, we agree that building a quality workforce is very challenging. As DOE notes, these challenges are made more difficult by the constant changes in mission focus that characterize DOE's history. In addition to those named above, Tom Laetz, Dan Feehan, William Lanouette, Tom Kingham, Linda Chu, James Charlifue, and Cynthia Norris made key contributions to this report.
The Department of Energy (DOE) manages the nation's nuclear weapons production complex, cleans up the environmental legacy from the production of nuclear weapons, and conducts research and development into both energy and basic science. DOE launched several reforms in the 1990s to realign its organizational structure, reduce its workforce, strengthen contracting procedures through competitive award practices, streamline oversight of activities, and delegate some responsibilities to the private sector. Despite these reforms, GAO found that management weaknesses persist because DOE's reforms were piecemeal solutions whose effect has been muted by three impediments to fundamental improvement: the department's diverse missions, dysfunctional organizational structure, and weak accountability. Management weaknesses and performance problems will likely continue unless DOE addresses these impediments in a comprehensive fashion.
Created in 1934, the Eximbank is an independent U.S. government agency that operates under a renewable congressional charter that expires on September 30, 1997. In conducting its operations, the Eximbank must comply with several statutory requirements. The Eximbank is required to (1) supplement and encourage, but not compete with, private sources of capital; (2) seek to reach international agreements to reduce government-subsidized export financing; and (3) provide financing at rates and on terms that are "fully competitive" with those of other foreign government-supported export credit agencies (ECAs) (12 U.S.C. sec. 635(b)(1)(A)(B)). To carry out these requirements, the Eximbank provides loans to foreign buyers of U.S. exports, loan guarantees to commercial lenders, export credit insurance to U.S. exporters and lenders, and working capital guarantees for pre-export production. Reflecting the growing move toward privatization in the developing world, the Eximbank has recently expanded its activities to include project finance. Project finance involves financing where repayment is provided through the project's anticipated future revenues rather than through sovereign (government) or other forms of guarantee. In fiscal year 1997, the Eximbank estimates project-financing deals will account for about 30 percent of its total financing commitments (these deals accounted for about 14 percent of its assistance in 1996). I would first like to discuss the various rationales that have been advanced for and against government involvement in export finance and GAO's position on this matter. The arguments for and against the programs focus on three issues: (1) trade policy leverage, (2) industry effects, and (3) employment and trade effects. Supporters of the Eximbank's export finance programs say that this assistance provides leverage in trade policy negotiations, helps to "level the international playing field" for U.S. business, corrects "market failures," and helps to increase exports and employment. According to Eximbank officials, the direct and indirect benefits include follow-on sales and support contracts, high-paying jobs, and federal tax revenues. Opponents say that the Eximbank's programs result in no net increase in national employment and output, misallocate resources, and are a form of corporate welfare. Supporters believe that the Eximbank's programs (1) help U.S. companies compete against foreign companies that receive similar types of government support and (2) provide leverage in trade policy negotiations. Supporters hold that the Eximbank helps to neutralize a foreign exporter's advantage in such situations by providing similar financing for U.S. exports. However, critics have questioned the usefulness of these programs in getting countries to reduce subsidies. As discussed below, the foreign competitor countries we studied offer a variety of government-supported export finance programs. As already noted, the Eximbank is required to seek international agreements to reduce government-subsidized export financing. OECD nations, including the United States, have made progress since the late 1970s in negotiating reductions in officially supported export subsidies. U.S. Treasury officials who participate in these negotiations told us that the Eximbank's programs have provided them with leverage in negotiating subsidy reductions. Another rationale that proponents offer is that markets do not always lead to an optimal allocation of resources and that so-called "market failures" provide an additional justification for government export finance programs. 
Eximbank claims that the following are examples of market failures: Private financial institutions may be unwilling to support exports to emerging markets even when the risk is correctly priced. Foreign buyers in certain markets may be unable to secure long-term financing for capital equipment. Finally, in probably the most often-cited example, small business exporters may have difficulty obtaining export financing. Supporters of government export finance programs believe that correcting such "market failures" can improve economic efficiencies and overall economic well-being. Opponents hold that there is no credible evidence that private capital markets do not function efficiently and that government intervention can potentially distort markets. According to the Eximbank, the exports it financed in fiscal year 1996 "supported or maintained" nearly 300,000 jobs. We do not dispute that some jobs are directly supported through the Eximbank's programs. However, economists and policy makers recognize that employment levels are substantially influenced by macroeconomic policies, including actions of the Federal Reserve. At the national level, under conditions of full employment, government export finance assistance programs may largely shift production among sectors within the economy rather than raise the overall level of employment in the economy. Hence, the jobs figure that the Eximbank reports may not represent net job gains. Others have supported export promotion programs as a way to substantially reduce the U.S. trade deficit. These programs, however, cannot produce a substantial change in the overall U.S. trade balance. The trade balance is largely determined by macroeconomic conditions, such as savings and investment and the government budget deficit. According to the President's Council of Economic Advisers, significantly reducing the trade deficit will require macroeconomic policy measures, such as eliminating the federal budget deficit. During fiscal years 1994 to 1996, the Eximbank provided an annual average of $12.8 billion in export financing commitments (loans, guarantees, and insurance) at an annual average program cost of $877 million. The Eximbank projects that it will provide about $16.5 billion of export finance support in fiscal year 1997, an all-time high. Program costs are projected to fall from $934 million in fiscal year 1996 to $773 million in fiscal year 1997 and to $681 million in fiscal year 1998 because of a projected increase in lower-risk financing (such as project finance and aircraft transactions, which consume relatively small amounts of its program budget). (See table I.1.) Another reason for the decrease is that no additional money for tied aid was included in the Eximbank's fiscal year 1998 budget request. In fiscal year 1996, China was the Eximbank's top export market ($1.2 billion), followed by Indonesia ($825 million), Mexico ($753 million), Trinidad and Tobago ($632 million), and Brazil ($488 million). (See fig. I.1 for a list of the Eximbank's top 10 markets and their associated program costs for fiscal year 1996.) Relative to total U.S. goods exported to these markets, the Eximbank supported about 11 percent of U.S. exports to China, about 22 percent of U.S. exports to Indonesia, about 1 percent of U.S. exports to Mexico, about 93 percent of U.S. exports to Trinidad and Tobago, and about 4 percent of U.S. exports to Brazil. During fiscal years 1994 through 1996, the 15 largest users (lead U.S. 
exporters or contractors) of Eximbank financing accounted for about $14.4 billion, or about 38 percent, of the Eximbank's total export-financing commitments made during that period. (See fig. I.2.) The export finance transactions involving these companies absorbed about 27 percent of the Eximbank's total program budget, or about $682 million over the same period. However, these data do not capture the full range of U.S. companies associated with Eximbank-financed deals, such as subcontractors and other suppliers. About 20 percent ($7.5 billion) of the Eximbank's financing commitments—about 79 percent of its total transactions—went to small business, primarily through its insurance programs. (See table I.2.) The Eximbank also supports the export of several dual-use (military and civilian) items. (See app. V.) The Eximbank has participated in international (OECD) negotiations to limit the use of tied aid and has used its tied aid capital projects fund to counter foreign countries' use of tied aid. The OECD efforts have resulted in a decrease in reported international levels of tied aid—the annual average level of tied aid decreased from about $10 billion in 1992 to approximately $4 billion in 1995. During 1994-96, the Eximbank board of directors approved the use of war chest funds in 40 instances. (See app. II for a list of firms and countries that actually received war chest assistance in 1994-96.) The balance in the tied aid war chest was $337.7 million as of September 30, 1996. Since fiscal year 1993, the Eximbank has issued guarantees related to 23 project finance deals totaling $5.6 billion (the estimated value of these projects was $21.5 billion). (See table III.1.) Because these projects tend to be large, the Eximbank often shares project risk with other export credit agencies, the Overseas Private Investment Corporation (OPIC), or with multilateral institutions such as the International Finance Corporation. With regard to project finance, the Eximbank's activity in this rapidly expanding area has increased from one deal in fiscal year 1993 to seven in fiscal year 1996. According to the Eximbank, this growth is a reflection of the rising demand for capital projects in emerging market economies. The six G-7 countries we studied—Canada, France, Germany, Italy, Japan, and the United Kingdom (U.K.)—all have ECAs, each with different roles and structures. (According to Euromoney, a total of 73 ECAs now exist worldwide.) The support the G-7 ECAs provide for their exporters can be measured in various ways. In terms of the percentage of national exports these ECAs have financed, the Eximbank is tied for last. In 1995 (the latest year for which comparative data are available), the Eximbank supported 2 percent of total U.S. exports. This figure is at the bottom of the range of support provided by the other G-7 nations. In contrast, Japan's ECAs supported 32 percent of Japanese exports in that year. France was second, with 18 percent. The support provided by Canada, Germany, the United Kingdom, and Italy ranged from 7 to 2 percent. In terms of the share of financing commitments extended by ECAs in 1995, the Eximbank ranks fourth: Japan, France, and Germany accounted for the largest shares. Japan extended over half (56 percent), followed by France (20 percent), and Germany (9 percent). The United States and Canada extended smaller shares—5 percent each—followed by the United Kingdom (3 percent) and Italy (2 percent). 
Comparing ECA programs is difficult for a number of reasons: Each nation has structured its export financing differently; there is no single export finance model. ECAs in the six nations we studied function as independent government agencies, sections of ministries, or private institutions operating under an agreement with the government. Most of the countries we studied offered overseas investment insurance through their ECA. However, in the United States, overseas investment insurance is offered through a separate agency, OPIC. (Table IV.1 provides a summary of the principal differences between the Eximbank and the six ECAs we studied.) Unlike the Eximbank, other ECAs appear to compete to varying degrees with private sources of export financing. They do not aim to function exclusively as "lenders of last resort," as the Eximbank strives to do. For example, the Japanese government's export insurance provider is Japan's only export insurer and reported that it insured about 28 percent ($124 billion) of all Japanese trade transactions in 1995—the highest level of trade and investment insurance underwriting in the world (private or public). Similarly, Canada's Export Development Corporation (EDC) does not function as a lender of last resort. The Eximbank, in contrast, aims to complement and not compete with private sources of capital. ECAs also have different fee structures. As stated earlier, the Eximbank must set fees that are "fully competitive" with the pricing and coverage offered by other major ECAs. The Eximbank has interpreted "fully competitive" by setting its fees at levels below most of the foreign competition (as low as or lower than about 75 percent of those offered by other major export credit agencies). The U.K.'s ECA aims to set fees at levels high enough to cover operating costs. The other ECAs we studied cover different amounts of political and commercial risks. Currently, the Eximbank provides 100-percent, unconditional political and commercial risk protection on most of the medium- and long-term coverage (coverage over 5 years) it issues. Other ECAs generally require exporters and banks to assume a portion of the risks (usually 5 to 10 percent) associated with such support. This concept of risk-sharing is a fundamental difference between the Eximbank and these ECAs. Finally, ECAs use different budgetary and reporting standards, making it difficult to directly compare the Eximbank's program costs with those of its counterparts. The 1990 Federal Credit Reform Act (P.L. 101-508, Nov. 5, 1990) requires the Eximbank to estimate and budget annually for the total long-term costs of its credit programs on a net present value basis. Other nations operate on a cash basis and are not subject to similar budget constraints. Under this approach, a government reimburses an ECA for total cash losses sustained on its operations during the year. Moreover, costs reported may not always represent total expenses to the government. For example, Canada's EDC uses a separate national interest account ("Canada Account") to support some export finance activity. The costs of this support are accounted for separately in its year-end reports. (Table IV.2 provides information on the costs of the G-7 nations' export-financing programs.) Although direct cost comparisons between the Eximbank and other national programs are difficult to make, the available cost data we reviewed suggest that several ECAs in the six countries we studied have reported improved financial results. 
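Before turning to those results, the net-present-value budgeting concept noted above can be made concrete with a small sketch. Under credit reform accounting, the subsidy cost of a direct loan is booked up front as the disbursement minus the discounted value of expected repayments, whereas a cash-basis ECA records losses only in the years they occur. The following is a minimal sketch of that calculation; the interest rate, repayment term, discount rate, and expected-recovery assumption are invented for illustration and are not Eximbank figures.

```python
# Toy credit-subsidy calculation in the spirit of the 1990 Federal Credit
# Reform Act: subsidy cost = disbursement - present value of expected
# repayments. All parameters below are hypothetical.

def subsidy_cost(principal, loan_rate, discount_rate, years, expected_recovery):
    """NPV cost of a direct loan, booked in the year the loan is made."""
    # Level annual payment on a fully amortizing loan.
    payment = principal * loan_rate / (1 - (1 + loan_rate) ** -years)
    # Present value of the payments actually expected to be collected.
    pv = sum(payment * expected_recovery / (1 + discount_rate) ** t
             for t in range(1, years + 1))
    return principal - pv

# A $100 million loan at 5 percent for 10 years, discounted at a 6 percent
# rate, expecting 95 cents on each dollar of scheduled payments, carries an
# up-front subsidy cost of roughly $9.4 million.
print(f"${subsidy_cost(100e6, 0.05, 0.06, 10, 0.95):,.0f}")
```

Under cash-basis reporting, by contrast, none of this cost would appear in the year the loan is disbursed; deficits would surface only later, as defaults actually occur.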
France, Germany, and the United Kingdom all reported positive financial results for their ECAs in 1995, the most recent year for which complete information was available. The Berne Union reported that among its member countries there was an aggregate loss of $501 million in 1995 compared with $6.5 billion in 1994. According to the Berne Union, this change was attributed to an improved global debt scenario and tighter ECA underwriting standards. In sum, the Congress may wish to assess the Eximbank's reauthorization within the context of the international competition. While these ECAs operate under different mandates and are subject to different budgeting and reporting standards than the Eximbank, they all help their exporters compete for contracts in the world market. The costs of these programs need to be weighed against their benefits to exporters and the leverage they provide in international negotiations to reduce government support for these types of programs. Mr. Chairman and Members of the Subcommittee, that concludes my prepared statement. I will be happy to answer any questions you may have. Table I.1: The U.S. Export-Import Bank's Financing Commitments and Program Costs, 1994-98 (table not reproduced here; N/A = not available; total costs are defined as the Eximbank's program costs and administrative costs). Figure I.2 notes (figure not reproduced here): Westinghouse Elec.; Telecom.; political risk coverage only. Appendix table legend: N/A = not applicable because no projects reported; at least one completed project with ECA financing; proposed project(s) with ECA financing. Table IV.1 summarizes the principal differences between the Eximbank and the six ECAs we studied; as recoverable here, its entries are:
United States, Export-Import Bank (Eximbank): public, independent government agency; statutory mandate to supplement and encourage, but not compete with, private sources of capital; receives a credit subsidy appropriation each year from the U.S. Congress.
Canada, Export Development Corporation (EDC): public, independent government agency; some competition with the private sector; aims to be financially self-sustaining.
France, COFACE and BFCE: private; both COFACE and BFCE have recently been privatized; the government covers deficits incurred on state account activities; COFACE exercises a dual role by administering export-financing support on behalf of the French government and offering export finance assistance through its own programs. (COFACE = Compagnie Francaise d'Assurance Pour Le Commerce Exterieur; BFCE = Banque Francaise du Commerce Exterieur.)
Germany, Hermes and C&L Deutsche Revision: private consortium; Hermes and C&L Deutsche Revision jointly administer the German export finance program on behalf of the state, exercising a dual role by also offering export finance assistance privately; the government covers deficits on state account activities; KfW offers export loans to German exporters.
Italy, Special Section for Export Credit Insurance (SACE) and Central Institute for Medium Term Credits (Mediocredito Centrale): public agencies; some competition with the private sector, as Mediocredito Centrale also functions as a commercial bank.
Japan, JEXIM and EID-MITI: public; JEXIM is an independent government agency, and EID-MITI is housed in Japan's Ministry of International Trade and Industry; JEXIM aims to supplement and encourage commercial bank financing but not compete with it; EID-MITI competes with private sector providers.
United Kingdom, Export Credits Guarantee Department (ECGD): public, independent government department; short-term business was privatized; has a specific mandate to break even financially.
Table IV.2: costs of the G-7 nations' export-financing programs (figures not fully recoverable here; amounts in parentheses, such as ($979), ($716), ($934), and ($503) for the Eximbank and ($1,985), ($1,501), ($1,821), and ($822) for Hermes/C&L Deutsche Revision, indicate deficits; N/A = not available). 
Note 1: There are several caveats with regard to how the numbers in this table should be interpreted. The type and nature of each nation's export credit agency (ECA) business varies in ways that ultimately influence its costs. In the case of Japan's Export-Import Bank, 44 percent of its fiscal year 1995 commitments were for loans not "tied" to Japanese exports, 37 percent were for overseas investment loans, and 8 percent for import loans. Only 11 percent of JEXIM's total financing in that year was reported to have been used for export loans. Where there are two ECAs, we have combined financial results. Note 2: Negative amounts indicate a deficit. Positive amounts indicate a surplus. The figures for the Eximbank represent the credit subsidy obligation and administrative costs obligated for the fiscal year. Canada's EDC reported net income of $171 million, $44 million, and $112 million in 1994, 1995, and 1996, respectively. However, these amounts do not include the support separately provided through the Canadian national interest account ($200 million in 1996 but not available for 1994 and 1995). EDC conducts a significant (42 percent) level of business with Organization for Economic Cooperation and Development (OECD) nations, which influences its profitability. The totals for Germany include interest revenues from debt reschedulings. The Japanese fiscal year ends March 31. The figures for Japan's EID-MITI include direct transfers from the Ministry of Finance for Paris Club debt writeoff of $272 million in fiscal year 1994 and $233 million in 1995. The United Kingdom's fiscal year ends March 31. ECGD figures include amounts spent on foreign exchange insurance and interest rate subsidies. For the last 3 fiscal years, the Eximbank has had the authority to finance exports of nonlethal defense items whose primary end use is for civilian purposes. The Eximbank is authorized to use up to 10 percent of its annual commitments to finance the exports of these dual-use (military and civilian) items. As depicted in tables VI.1 and VI.2, the Bank has financed several such items but has remained well below the 10-percent annual cap. The Eximbank's authority to finance these items expires on September 30, 1997. We are required to report to the Congress no later than September 1, 1997, on the end use of these items. We plan to issue a report to the Congress on this matter by late July 1997. Tables VI.1 and VI.2 are not reproduced here; recoverable fragments include the cap on dual-use commitments (a 10-percent limit of total projected commitments) and an entry for Romantsa (Romanian Air Traffic Administration Services).
GAO discussed issues concerning the reauthorization of the U.S. Export-Import Bank (Eximbank), focusing on: (1) the rationale for and against the Eximbank's programs; (2) the ways in which its assistance is distributed; and (3) foreign competitors' export finance programs. GAO noted that: (1) in reviewing the Eximbank's export finance programs, the Congress needs to weigh the benefits to the U.S. economy of the Eximbank's programs against their costs; (2) while there are numerous arguments for and against government export financing programs, the most compelling case for these programs appears to be in helping to "level the international playing field" for U.S. exporters and providing leverage in trade policy negotiations to induce foreign governments to reduce and ultimately eliminate such subsidies; (3) during fiscal years (FY) 1994 to 1996, the top 15 users (lead U.S. exporters or contractors) of Eximbank financing accounted for about 38 percent of the value of Eximbank's financing commitments; (4) during the same period, the Eximbank also reported that 20 percent of its assistance went to support small business; (5) the Eximbank believes that these small business transactions would not otherwise have been financed by private lenders; (6) in geographical terms, China, Indonesia, Mexico, Trinidad and Tobago, and Brazil were Eximbank's top markets in FY 1996; (7) the six major industrialized countries GAO reviewed all maintain various types of export finance assistance programs; (8) although considerable differences exist among these programs, they all help exporters in competing for market share in developing markets by providing varying types of financial assistance (loans, guarantees, and insurance); (9) the Eximbank provides similar types of assistance and also administers a tied aid capital projects fund (also known as the "war chest") as part of its programs; (10) tied aid is concessionary (low interest rate) financing that is linked to the procurement of goods and services from the donor country; (11) the war chest is designed to counter other countries' trade-distorting tied aid practices; (12) Eximbank's assistance programs have cost the U.S. taxpayers about $4 billion over the last 5 years; (13) the Eximbank's programs require substantial levels of taxpayer support and the U.S. government's ultimate objectives continue to be aimed at reducing and eliminating such export financing subsidies and allowing exporters to compete on the basis of price, quality, and service, not subsidized financing; (14) the U.S. government needs to make renewed efforts to use international forums such as the Organization for Economic Cooperation and Development to reduce and eventually eliminate such subsidized export finance programs; and (15) given the growing importance of exports to national economic performance, achieving the objective of eliminating all financial subsidies may prove difficult.
Until the 1800s, America’s schools were mainly private local entities. In the mid-1800s, several states rewrote their constitutions to create statewide public education systems and establish government responsibility for financing schools. Today, all states have constitutional provisions on free public education, and, based in part on these provisions, a number of state courts have ruled that education is a fundamental right subject to equal protection under the law. The largest single federal elementary and secondary education grant program is title I of the Elementary and Secondary Education Act. The program, which began in 1965, continues to focus on providing compensatory services to educationally disadvantaged children through categorical, program-specific grants. The fiscal year 1997 appropriations for title I compensatory education for the disadvantaged was $7.7 billion. Federal aid, however, only provides about 7 percent of the funding for elementary and secondary education. Nationwide, the other 93 percent is about evenly split between state and local funding, although the state share of total (state and local) funding for education varies by state. Although states have increased their control over schools, state contributions in the 1991-92 school year varied from 8 percent of total funding in New Hampshire to 85 percent of total funding in New Mexico. States’ ability to fund education also varies. States with higher income levels can provide more funding for their students. In the 1991-92 school year, states’ average income per weighted pupil ranged from $41,385 in Utah to $160,761 in New Jersey. States also vary in the number of students with additional educational needs, such as poor or disabled students, who tend to have education costs higher than average. For example, the student poverty rate among states in 1989 ranged from about 33 percent in Mississippi to under 8 percent in New Hampshire. In addition, localities’ ability to raise revenues varies widely. Localities raise revenues primarily through property taxes and, to a lesser extent, through local sales and income taxes. However, a heavy reliance on local property taxes as a major source of school revenue has produced funding disparities because school districts’ property tax bases vary widely. Localities with low property values usually have low funding per pupil even with high tax rates; localities with high property values have high funding per pupil even with low tax rates. Since the late 1960s, the funding gaps arising from the continued reliance on local tax revenues have led to litigation challenging the constitutionality of state school finance systems, with varying results. Researchers concerned about the equity of school finance systems—that is, the distribution of education funding—have focused on two important definitions of equity: vertical equity and fiscal neutrality. Vertical equity recognizes that legitimate differences occur among children and that some students, such as those who are disabled, have low academic achievement, or limited English proficiency, need additional educational services. After adjusting the pupil count to give greater weight to those pupils who need extra educational services and adjusting the funding for cost differences in educational resources, some experts would argue that funding per weighted pupil should be nearly equal among districts. 
Fiscal neutrality asserts that no relationship should exist between educational spending per pupil and local district property wealth per pupil (or some other measure of fiscal capacity). That is, the quality of education should be a function only of the entire state's wealth, not of a locality's. Unlike vertical equity, which calls for nearly equal funding per weighted pupil among districts after adjustments have been made, fiscal neutrality allows for differences in funding as long as they are not related to the districts' taxable wealth. In addition to equity, researchers are also concerned about the adequacy of educational resources. Education funding is termed adequate if it enables each student to achieve some minimum level of academic performance. Not much is known, however, about the level of funding needed to achieve a certain level of performance. As a result, determining an adequate level of funding for a district is difficult. In response to legal and political pressures, states have sought to equalize—that is, compensate for the differences in—districts' abilities to raise revenue for funding education. In general, states have used one or both of the following equalization strategies: adding new state or local money to the school finance system to increase funding for poor districts or redistributing the available funding to poor districts by modifying school finance formulas. Redistributing education revenues may also include recapturing the local revenues raised above an established level in wealthy districts and giving them to poor districts. One of the more common funding formulas used to equalize the ability of districts to raise education revenues is the foundation program. A foundation program sets an expenditure per pupil—the minimum foundation—at a level that would provide at least a minimum-quality education for every pupil. Usually, districts must put forth a minimum local tax effort to receive state aid, which makes up the difference between what localities raise by the required local tax effort and the foundation amount. This funding formula results in states targeting more state education funds, on a per pupil basis, to those districts with low tax bases than to those with high tax bases. (A simple numerical sketch of this formula appears at the end of this discussion.) Despite the seeming simplicity of this funding formula, equalizing school finance systems is a complex and difficult undertaking. In a recent report, we reviewed the experiences of three states that had used one or both of the equalization strategies noted above. Although these states reported reduced funding gaps, their legislative solutions reflected citizens' concerns about increased taxes to raise more state revenues and concerns of wealthy districts that wanted to maintain existing spending levels. Although most states pursued strategies to supplement the local funding in their poorest districts, the strategies generally did not offset the advantage of wealthy districts in raising local funds. These results occurred even after adjusting for the geographic differences in education costs and student needs within each state. In most states, the total funding per weighted pupil in districts was still largely determined by districts' income per weighted pupil. In other words, these states had not achieved an income-based fiscal neutrality in their school finance system. On average, wealthy districts had about 24 percent more total funding per weighted pupil than poor districts. 
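As promised above, here is a minimal sketch of the foundation aid arithmetic: the state pays each district the difference between the foundation amount and what the required minimum tax effort raises locally. The foundation level, required tax rate, district figures, and recapture option are all hypothetical; actual state formulas vary considerably.

```python
# Minimal sketch of a foundation aid formula. All figures are hypothetical.
FOUNDATION_PER_PUPIL = 5_000  # minimum spending level set by the state
REQUIRED_TAX_RATE = 0.010     # minimum local effort: 1% of taxable property value

def foundation_aid(pupils, property_tax_base, recapture=False):
    """State aid needed to bring a district up to the foundation level.

    A negative result means the required local effort alone raises more than
    the foundation amount; with recapture, the state takes back the excess
    for redistribution, and otherwise aid is simply zero.
    """
    local_revenue = REQUIRED_TAX_RATE * property_tax_base
    gap = FOUNDATION_PER_PUPIL * pupils - local_revenue
    if gap >= 0 or recapture:
        return gap
    return 0

# Poor district: 1,000 pupils, $100M tax base -> $1M local, $4M state aid.
print(foundation_aid(1_000, 100_000_000))        # 4000000
# Wealthy district: 1,000 pupils, $600M tax base -> $6M local, no aid needed,
# or $1M recaptured by the state where recapture is in effect.
print(foundation_aid(1_000, 600_000_000))        # 0
print(foundation_aid(1_000, 600_000_000, True))  # -1000000
```

The example shows why the formula targets state funds to low-tax-base districts: at the same required tax rate, the poor district's small base leaves a large gap for the state to fill, while the wealthy district needs nothing.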
Figure 1 ranks states according to the extent to which total funding of school districts in school year 1991-92 was linked to district income. In this figure, the center line, which equals a fiscal neutrality score of 0, represents the goal of ensuring that education funding is unrelated to differences in district income per weighted pupil. The figure shows that the total funding of districts in 37 states favored wealthier districts; that is, the total funding increased as the income of the district increased. In three states the opposite occurred—the total funding decreased as district income increased. Among the 37 states whose school funding favored wealthier districts, the amount of funding available as district income increased varied widely. At the high end of the 37 states, students in Maryland had about $25 more in total funding for a $1,000 increase in income per weighted pupil above the state average. At the low end, students in Washington had only about $4 more for a $1,000 increase in income per weighted pupil above the state average.

Three key factors affected the size of the funding gap between poor and wealthy districts. Two of these—targeting of state funds to poor districts and the state’s share of overall education funding—represent states’ school equalization policies. The third factor—the local tax effort of poor districts relative to that of wealthy districts—stems mainly from choices made at the local level. In general, an increase in any one of these factors decreases the funding gap between poor and wealthy districts. Nationwide, the three factors accounted for 61 percent of the variation in the income-related funding gap. Of the three factors, targeting was the least important in explaining the variation in funding gaps between wealthy and poor districts. The state’s share of total funding accounted for more of the variation in the income-related funding gap than targeting. The relative local tax effort of poor districts to wealthy districts accounted for most of the variation (see app. III).

State targeting efforts typically helped to reduce but did not eliminate the gap in total funding between wealthy and poor districts. These results occurred even after adjusting for geographic differences in education costs and student need. For example, Connecticut’s wealthy districts had over three times the amount of local funding as its poor districts in school year 1991-92 (see table 1). In contrast, state funding was over three times higher in poor districts than in wealthy districts; the wealthy districts still had, however, about 34 percent more total funding per weighted pupil than the poor districts. In Connecticut, the gap in total funding between the poor and wealthy districts was $2,559. Appendix III provides similar data for all states.

Like Connecticut, most states (33 of 49) targeted more state funds to poor districts to some degree on the basis of district income. Of the remaining 16 states, 14 provided approximately equal state funding to poor and wealthy districts. Two states—Louisiana and North Dakota—provided more state funding to wealthy districts than to poor districts. Among the states that targeted more funds to poor districts, the additional amount of state funding varied widely. For example, for a $1,000 decrease in district income below the state average, Nevada provided about $42 more in state funding per weighted pupil; Indiana provided about $6 more in state funding per weighted pupil. Appendix V provides information on all the states’ targeting efforts.
A high state share of total education funding offsets income-related funding gaps, even if the targeting effort is low. For example, Washington had virtually no targeting effort but provided about 75 percent of the total funding for education. The poorest districts in Washington had only 4 percent less ($229) to spend per weighted pupil than the wealthiest districts. In contrast, Michigan had a relatively high targeting effort but funded only about 33 percent of total education funding in the state, a relatively low share. As a result, the poorest districts in Michigan had 36 percent ($1,923) less to spend per weighted pupil than the wealthiest districts (see figs. 2 and 3). Appendix V provides information on the state share for all states.

The willingness of poor districts to tax themselves at a higher rate than wealthy districts helped reduce the funding gap between poor and wealthy districts. In 35 states, poor districts made a higher tax effort than wealthy districts. The tax effort is defined as the ratio of district local funding to district income. Poor districts must make a higher level of tax effort to finance comparable education programs because the same tax effort generates less revenue in poor districts than in wealthy districts. For example, Kansas and Pennsylvania each targeted additional funds to poor districts to about the same extent and funded about the same share of total education funding. Kansas’ poor districts, however, taxed themselves about 24 percent more than the state’s wealthy districts, while Pennsylvania’s poor districts had about the same tax effort as its wealthy districts. As a result, the gap in total funding between poor and wealthy districts was smaller in Kansas than in Pennsylvania (see fig. 1).

To determine the effects of state school finance policies on the funding gap between poor and wealthy districts, we analyzed states’ school finance data. We developed a new equity measure, the implicit foundation level, which indicates the extent to which these policies enable districts to finance a minimum quality education for each student with an equal tax effort. Then we compared this level to the state average to determine states’ equalization efforts. This section describes how we developed these two measures.

We determined the combined effects of state equalization policies (targeting and state share), while excluding the effects of local tax effort. To accomplish this, we viewed each state as if it were distributing state funds according to a foundation program. In such a program, the state ensures all districts the ability to finance a foundation, or minimum, amount of funding per pupil, provided that the districts make a minimum local tax effort. Using a foundation funding model and assuming all districts made an equal local tax effort, we estimated the implicit foundation level that each state’s equalization policies in school year 1991-92 could have supported. This implicit foundation level is an estimate of the minimum amount of total funding that states’ districts could spend per student if districts were to make an equal minimum local tax effort. This new measure, for the first time, allows analysts to examine the extent to which the funding gap between poor and wealthy districts is due to state equalization policies (state share and state targeting) and the extent to which it is due to local policies (relative differences in local tax efforts). Appendix IV explains how we developed the implicit foundation level.
Figure 4 illustrates the implicit foundation level using a hypothetical example of two districts in a state, one poor and one wealthy. For each district, we graphed how much total funding per weighted pupil is associated with a given level of tax effort. Since poor districts generally receive more state funding per weighted pupil than wealthy districts, in this example we assigned the poor district $2,500 in state funding per weighted pupil, twice the amount the wealthy district was assigned. Therefore, the line for the poor district starts out higher (the district has more state money) on the graph than the line for the wealthy district (which has less state money). As figure 4 shows, as both districts increase their local tax effort, the wealthy district raises more local revenue than the poor district for a given level of tax effort. For any given tax effort past a certain point (where the lines cross in fig. 4), the wealthy district’s local revenue more than offsets the additional state money that the poor district receives—therefore, the total funding in the wealthy district exceeds total funding in the poor district. The point at which the total funding lines cross is the implicit foundation level and is the only point at which the two districts have the same amount of total funding for the same tax effort.

We compared states’ implicit foundation levels with the maximum foundation levels that would be possible given each state’s amount of total funding devoted to education. We call this ratio a state’s equalization effort. State average funding per weighted pupil is actually the maximum foundation level (see app. IV for a mathematical explanation of this). A state’s equalization effort is a measure of the extent to which districts in a state can finance the state average with an average tax effort. To achieve the maximum foundation level without changing the total funding for education, a state could increase its effort to target funds to poor districts, increase the state’s share of education funding, or both.

States’ implicit foundation levels varied widely, averaging $3,134 per weighted pupil and ranging from a low of $721 in New Hampshire to a high of $5,415 in Alaska in school year 1991-92. In line with the purpose of foundation programs, these implicit levels indicate the extent to which states’ school finance policies ensure a level of funding assumed adequate for districts to finance at least a minimum quality education for every student with an equal local tax effort. Appendix V provides information on the implicit foundation levels in each state.

States’ equalization efforts also varied. Only one state—Nevada—made the maximum equalization effort given the total funding available for education in the state. As a result, Nevada’s state school policies in school year 1991-92 enabled each district to spend the state average on each student with an average tax effort. The implicit foundation levels in the other 48 states were less than their state averages, with equalization efforts ranging from about 87 percent (Arkansas and Kentucky) to about 13 percent (New Hampshire). In 14 states, the implicit foundation level was less than half the state average. Figure 5 summarizes the states’ equalization efforts in school year 1991-92. State equalization efforts, representing the combined effects of state targeting and state share, have an important effect on reducing the funding gap between poor and wealthy districts.
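The crossing point in figure 4 can be computed directly. In the sketch below, the state aid amounts follow the report's example ($2,500 for the poor district and half that for the wealthy one); the income figures are our own assumptions for illustration.

```python
# Figure 4's two-district example: find the tax effort at which total
# funding (state aid + tax effort x income) is equal in both districts.

g_poor, v_poor = 2_500, 50_000          # state aid, income per weighted pupil
g_wealthy, v_wealthy = 1_250, 120_000   # assumed income figures

# Set g_poor + t*v_poor = g_wealthy + t*v_wealthy and solve for t.
t_star = (g_poor - g_wealthy) / (v_wealthy - v_poor)
foundation = g_poor + t_star * v_poor

print(f"crossover tax effort: {t_star:.4f}")
print(f"implicit foundation level: ${foundation:,.0f} per weighted pupil")
# Past t_star, the wealthy district's larger tax base more than offsets
# its smaller state aid, so its total funding pulls ahead.
```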
When we controlled for the differences in the tax effort of wealthy and poor districts in each state, we found that states with higher equalization efforts tended to have smaller funding gaps between poor and wealthy districts, as measured by their fiscal neutrality scores (see app. V). However, differences in the tax effort of wealthy and poor districts still accounted for more of the variation in income-related funding gaps than did states’ equalization efforts. That is, states’ finance policies, as measured by their equalization efforts, helped to reduce the funding gap between poor and wealthy districts, but differences in the tax effort of these districts continued to be the more important determinant of the funding gap. For example, Maryland had an above-average equalization effort (about 63 percent), yet it also had the largest income-related funding gap (see fig. 1). This large gap can be explained in part by the relative local tax effort: wealthy districts in Maryland made a tax effort that was about 53 percent higher than the tax effort of poor districts, the highest such ratio in the nation. Thus, despite Maryland’s substantial efforts to equalize funding, the effort did not overcome the differences in local funding by district that were due, in part, to the relatively high tax effort of wealthy districts (see app. III).

To further reduce the funding gap between poor and wealthy districts, states would need to increase their equalization effort by increasing their share of total funding, increasing their targeting effort to poor districts, or both. To illustrate the extent of the change that would be needed to maximize a state’s equalization effort without any increase in state funding, we analyzed state targeting in school year 1991-92, while holding the state share constant and assuming all districts made an equal tax effort. Under this scenario, 48 states would have had to reduce their funding of wealthy districts to increase their funding of poor or middle-income districts or both. In many states, the magnitude of the targeting change would have had to be significant to enable districts to spend the state average with an average tax effort. Relative to the distribution needed to attain the state average for all students, 29 states would have had to significantly shift their funding from wealthy districts to poor or middle-income districts or both (see table 2).

Detailed information on state equalization policies and changes in state funding needed to enable districts to spend the state average for each student with an average tax effort appears in the state profiles in this report (see apps. VII through LV). Each profile provides information on (1) the actual state and local funding distribution to districts in school year 1991-92 for districts in five groups of approximately equal student population, according to increasing district income, and (2) how funding would have been distributed among these groups if each district could have financed the state’s average total funding per weighted pupil with an average tax effort.

We contacted state education officials to determine the extent to which the states had changed their targeting effort and state share between school years 1991-92 and 1995-96. Twenty-five states reported making little or no change to their targeting effort or state share. The remaining 24 states reported making targeting changes that may have increased their implicit foundation levels.
For example, education officials in Missouri said that changes implemented in 1993 had increased targeting to low-wealth districts and that the state’s new formula provides more state funding to districts with both lower property wealth and higher tax efforts. Six of the 24 states also reported increasing their state share of education funding by 10 percentage points or more: Tennessee (10), Colorado (11), Kansas (18), Utah (24), Oregon (30), and Michigan (45). In some cases, lawsuits challenging the constitutionality of a state’s school finance system have prompted changes in targeting or state share. For example, one lawsuit alleged that Tennessee’s school finance system resulted in inequalities that violated the state constitution, and the state has since significantly revised its system. Appendix LVI summarizes the changes states made between school years 1991-92 and 1995-96. Of the 10 states noted in table 2 as requiring the largest shifts in state funding to poor districts, 5 reported making changes that provided more or much more state funding to low-wealth districts than in school year 1991-92. The other five states reported making little or no change to their school finance systems by school year 1995-96.

Recognizing the struggle of poor districts to adequately fund the education needs of their students, states have used several strategies to reduce the funding gap between poor and wealthy districts. States that want to further reduce the funding gap between poor and wealthy districts would have to increase the state share of total funding, increase their targeting effort to poor districts, or both. If targeting is increased, poor and middle-income districts would receive more state funding, while wealthy districts would receive less. Alternatively, a higher state share can offset income-related gaps even if the targeting effort is low, according to our analysis. However, making such changes may be difficult because of taxpayer concerns.

Decisionmakers and others can use the measures in this report—particularly the fiscal neutrality score, implicit foundation level, and equalization effort—to assess the equity effects of current and proposed changes in state school finance policies. In addition, the implicit foundation level, when compared to a standard like the state average, can be used as a measure of the adequacy of funding provided by a state’s school finance system. Moreover, these measures can be used to assess progress over time in achieving more equity in school finance systems within states.

The Department of Education reviewed a draft of this report and had no comments. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from the date of this letter. At that time, we will send copies to appropriate congressional committees and all members of the Congress, the Secretary of Education, and other interested parties. Please contact me on (202) 512-7014 or Eleanor L. Johnson, Assistant Director, on (202) 512-7209 if you or your staff have any questions. GAO contacts and staff acknowledgments appear in appendix LVII.
The objectives of this study were to determine (1) the size of the gap in total (state and local combined) funding between poor and wealthy school districts in each state, (2) the key factors that affect the size of states’ funding gaps, and (3) the effect of states’ school finance policies on the funding gaps. To help answer these questions, we used school year 1991-92 district-level data from the Department of Education, the most recent available, and supplemented these data when key data were missing. We used standard school finance measures and developed a new method to measure the effect of state policies on the funding levels of school districts. We supplemented our analysis by contacting education officials in the states to determine the extent to which a state’s school finance system had changed since school year 1991-92.

For this study, we conducted a district-level analysis of all states except Hawaii. We wanted our analysis to examine state funding for regular school districts with students in grades kindergarten to 12, so the analysis excluded administrative districts and districts serving unique student populations, such as vocational or special education schools. Our analysis also excluded a number of small districts that had extreme outlying values of income per pupil. Finally, we excluded districts that lacked data for critical variables, such as poverty level. The 2,235 districts excluded from the analysis had a total enrollment of 335,558. The final database used in our analysis contained 14,425 districts with a total of 41,204,610 students, representing 99.2 percent of the students in the 49 states.

This study was based mainly on revenue and demographic data obtained from the Department of Education’s Common Core of Data (CCD) for the 1991-92 school year, the most current data available for a national set of districts. Data for the CCD were submitted by state education agencies and edited by the Education Department. We obtained district per capita income and population data directly from the 1990 census because they were not available in the CCD. For variables in our analysis that had missing or incomplete data, we obtained the data directly from state education offices. For example, we obtained district-level data on disabled students for school year 1991-92 directly from the state education offices of nine states because the CCD either did not report the number of disabled students in those states or reported a number substantially different from one reported by another Education Department source. We made further edits on the basis of consultations with Department of Education experts.

In some cases, we imputed critical data when they were missing and not available from other sources. We imputed income per pupil data for 199 districts in California because the per capita income data needed to compute this variable were not reported for these districts. We also imputed cost index data for 310 districts, including 18 in Alaska and 72 in New York (mainly Suffolk County). The method we used to impute cost index data was based on the recommendation of the school finance expert who developed the cost index.

We conducted structured telephone interviews with state school finance officials to determine the extent to which states had changed their school finance systems since school year 1991-92. We did not, however, verify the accuracy of the officials’ statements.
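A minimal sketch of the district screening described above, assuming a pandas DataFrame with hypothetical column names (the report does not specify the CCD field names or the exact outlier rule, so both are placeholders):

```python
import pandas as pd

df = pd.read_csv("ccd_1991_92.csv")   # hypothetical file and columns

regular = df[df["district_type"] == "regular_k12"]    # drop admin/special districts
complete = regular.dropna(subset=["income_per_pupil", "poverty_rate"])

# Drop *small* districts with extreme income-per-pupil values
# (an illustrative percentile rule, not the report's exact cutoff).
lo, hi = complete["income_per_pupil"].quantile([0.01, 0.99])
outlier = ~complete["income_per_pupil"].between(lo, hi)
small = complete["enrollment"] < 300
final = complete[~(outlier & small)]

print(f"{len(final)} districts, {final['enrollment'].sum():,} students retained")
```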
To measure the size of the gap in total funding between poor and wealthy districts, we used the elasticity of total (state and local) funding in a district relative to district income, a measure of a district’s ability to raise revenue for education. In a regression model, we used dependent and independent variables that were adjusted for differences in geographic cost and student need within the state and put into index form (see app. II). A district’s total funding per weighted pupil was the dependent variable; a district’s income per weighted pupil was the independent variable. Each observation was weighted by the district size to allow districts with larger enrollments to have a greater effect on the results. Appendix III describes this process in detail.

To determine the relationship between the total funding gaps and the key factors affecting the size of the gaps, we conducted a regression analysis using a state’s fiscal neutrality score (the elasticity of total funding to district income) as the dependent variable and the following as independent variables: a state’s share of total funding, a state’s targeting effort (described in this app.), and a state’s relative local tax effort (the elasticity of local tax effort relative to district income—see app. III).

To measure the extent to which states targeted their education funds to poor districts, we estimated the elasticity of state funding in a district relative to district income. Using a regression model, we defined the dependent variable as a district’s state funding per pupil and the key independent variable as a district’s income per pupil. Both variables were adjusted for differences in geographic cost within the state (see app. II). To control for student need and economies of scale, we included four additional independent variables: poor students, disabled students, high school students, and district size. All variables in the analysis were put into index form and were included in the regression. Each observation was weighted by the district size to allow districts with larger enrollments to have a greater effect on the results. We set certain constraints on the regression coefficients (described in app. V). The resulting regression coefficient of the income per pupil variable is our measure of a state’s targeting effort and measures the elasticity of state funding relative to district income. Appendix V describes this methodology in greater detail.

We developed an equity measure—the implicit foundation level—to assess the state policies (targeting and state share) that affect the funding gap between wealthy and poor districts. We calculated this measure using a formula involving a state’s share of total funding, a state’s targeting effort, and a state’s average total funding per weighted pupil. To calculate the targeting effort in this formula, we used the same multivariate linear regression as the one already described, except we imposed the restriction that the income per pupil variable have a nonpositive coefficient. Appendix IV explains the theory behind the equity measure we developed, and appendix V explains the regression.

Appendixes VII through LV provide profiles of each state’s school finances in school year 1991-92. The profiles provide summary information on the total funding per weighted pupil, states’ share of education funding, states’ targeting effort, implicit foundation level, equalization effort, and fiscal neutrality score.
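As a sketch of the elasticity estimation described above, the snippet below runs an enrollment-weighted regression of the total-funding index on the income index on synthetic districts; statsmodels' weighted least squares stands in for whatever estimation routine the study actually used.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200                                   # synthetic districts
income = rng.lognormal(11, 0.3, n)        # income per weighted pupil
funding = 3_000 + 0.02 * income + rng.normal(0, 300, n)
enroll = rng.integers(100, 20_000, n)     # district size = regression weight

def windex(x):
    """Express a variable as an index: percent of the weighted state average."""
    return x / np.average(x, weights=enroll)

y = windex(funding)                       # total funding index
X = sm.add_constant(windex(income))       # income index plus a constant

score = sm.WLS(y, X, weights=enroll).fit().params[1]
print(f"fiscal neutrality score (elasticity): {score:.3f}")  # > 0: favors wealthy
```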
To report the state profiles for school year 1991-92, we ranked each state’s districts according to increasing district income and then divided the districts into five groups, each with about the same number of students. We then calculated the mean state, local, and total funding per weighted pupil for each group. These funding figures were also adjusted for differences in geographic costs within the state (see app. II). Appendix VI provides an overview of the state profiles.

Because we relied on state and local funding data from the 1991-92 school year, we telephoned state school finance officials to determine what changes had occurred in the school finance systems from school years 1991-92 through 1995-96. We specifically asked about changes in targeting that would affect low-wealth districts and changes in the state’s share of total funding. Appendix LVI presents the interview results.

Education costs vary by school district in a state (and nationwide) because of geographic differences in the cost of educational resources and in the number of students with special needs. The cost of educational resources may vary across districts for several reasons. For example, a district may be able to hire a teacher of a given quality at a lower rate than other districts because the district may have a lower cost of living or offer certain amenities or working conditions that are more attractive to teachers. Also, districts with either large or small student populations may face higher costs than other districts because of the diseconomies of scale that can occur in providing services at these levels. The cost of educating students also varies for a number of other reasons. Districts with high proportions of students with special needs, such as the disabled, the poor, and those with limited English proficiency, generally have higher education costs than average because such students require additional educational services. Furthermore, districts that largely serve high school students tend to have higher per pupil education costs than those that largely serve elementary students.

As discussed in our previous report on equity measures, estimates of funding levels or disparities among districts are more comparable when they account for districts’ differences in educational resource costs and student needs. This appendix discusses how we made these adjustments in our study.

To adjust for geographic differences in resource costs by district, we used a national district-level teacher cost index recently developed for the National Center for Education Statistics (NCES). Although an index that examines differences in the cost of living is available by district, the NCES teacher cost index is better suited to comparing districts because it considers the purchasing power of districts in determining personnel-related costs, a major cost to school districts. Our focus is on a district’s ability to provide comparable educational services to its students, rather than on whether teachers’ salaries are adequate given the cost of living in their area. Not all costs, however, vary within a state. For example, the cost of books, instructional materials, and other supplies and equipment tends to vary little within a state or, for some items, the nation. Therefore, we used the teacher cost index to adjust only the 84.8 percent of current expenditures estimated to relate to personnel costs, including salaries, fringe benefits, and some purchased services.
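The report does not spell out the adjustment formula itself; a plausible sketch, under the stated assumption that only the 84.8-percent personnel share varies with the index, deflates that share by the district's within-state cost index and leaves the remainder untouched.

```python
# Cost adjustment sketch: deflate only the personnel share of spending
# by the district's within-state teacher cost index (1.0 = state average).

PERSONNEL_SHARE = 0.848   # share of current expenditures tied to personnel

def cost_adjusted(funding_per_pupil, cost_index):
    personnel = PERSONNEL_SHARE * funding_per_pupil / cost_index
    other = (1 - PERSONNEL_SHARE) * funding_per_pupil   # assumed uniform cost
    return personnel + other

for index in (0.90, 1.00, 1.15):          # hypothetical district indexes
    print(f"cost index {index:.2f}: ${cost_adjusted(5_000, index):,.0f} adjusted")
```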
Finally, we rescaled the NCES teacher cost index to create district-level indexes for each state that reflect the education resource cost differences in just one state rather than the differences nationwide. To rescale the teacher cost index, we determined the average teacher cost index for the state and then divided each district’s teacher cost index by the state average to obtain a district-level teacher cost index adjusted for within-state differences. A teacher cost index equal to 1.0 indicates a district with average resource costs for the state.

Table II.1 provides the average cost index for each of the five income groups of districts in a state. In all states except four (Alaska, Nevada, New York, and North Carolina), the range in the average cost indexes across groups in the table was less than twice the standard deviation of the district-level cost index. This suggests that states may have had more variation in cost differences among individual districts than across the income groups shown in the table.

To account for the differences in student need by district, we made adjustments that weighted poor students and disabled students according to their need for additional services. Our analysis did not account for limited English proficient students, generally recognized as a third group of high-cost students, because we could not obtain accurate district-level data on the number of such students. Students with disabilities were given a weight of 2.3 because the cost of educating such children is generally 2.3 times the cost of educating children who do not need special educational services, although the cost of educating children with specific types of disabilities varies widely. We also assigned a weight of 1.2 to children from poor families. This additional .2 weighting for poor students stems from an estimate based on the average title I allocation per student divided by average funding per student. We used a set of weights developed for an NCES report. Using these weights, we developed a district-level need index adjusted for differences within the state. We used the following equation to calculate the need index for each district:

need index = (AdjMem / Member) / (AdjStMem / StMem)

where
AdjMem = adjusted membership; a district’s fall membership + (1.3 x students with Individual Education Plans) + (.2 x students below the poverty line);
AdjStMem = adjusted membership in a state; the sum of AdjMem for all districts in a state;
Member = membership; a district’s fall membership; and
StMem = state membership; the sum of Member for all districts in a state.

Table II.2 provides the average need index for each of the five income groups of districts in a state. In all states except three (Alaska, Maryland, and New Mexico), the range in the average need indexes across groups in the table was less than twice the standard deviation of the district-level need index. This suggests that states may have had more variation in need differences among individual districts than across the income groups shown in the table. In tables II.1 and II.2, Nevada was divided into only four groups because of the distribution of the student population; the wealthiest group is group 4.
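The need index computation translates directly into code; the district figures below are hypothetical.

```python
# Need index sketch: weight disabled students 2.3 (1.3 extra) and poor
# students 1.2 (.2 extra), then compare each district's adjusted-to-raw
# membership ratio with the statewide ratio.

districts = [
    # (fall membership, students with IEPs, students below poverty)
    (2_000, 200, 600),
    (5_000, 300, 400),
    (1_200, 60, 500),
]

def adjusted(membership, iep, poor):
    return membership + 1.3 * iep + 0.2 * poor

st_mem = sum(d[0] for d in districts)
adj_st_mem = sum(adjusted(*d) for d in districts)

for mem, iep, poor in districts:
    need_index = (adjusted(mem, iep, poor) / mem) / (adj_st_mem / st_mem)
    print(f"membership {mem:>5}: need index {need_index:.3f}")
```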
In our study, the goal of fiscal neutrality is achieved in a state when total (state and local) funding per weighted pupil does not depend on differences in districts’ income per weighted pupil. We measured the extent of this dependency using the income elasticity of total funding per weighted pupil and defined this elasticity as a state’s fiscal neutrality score. A positive fiscal neutrality score would indicate that per pupil funding rises with income; a fiscal neutrality score of 0 would indicate that fiscal neutrality has been achieved (that is, no relationship exists between per pupil funding and per pupil income); and a negative score would indicate higher funding in low-income districts. The first section of this appendix presents the method we used to estimate each state’s fiscal neutrality score and the results of our analysis. The second section shows how the variation in fiscal neutrality scores among states is explained by differences in state equalization policies (state share and state targeting) and by differences in the relative local tax effort of wealthy and poor districts.

We used a linear regression model to estimate the elasticity of total funding in a district relative to district income. Both the dependent and independent variables were adjusted for differences in geographic cost and student need within the state and expressed as a percent of their respective state averages. By expressing each variable as a percent of its state average value, both the dependent and independent variables can be interpreted as index numbers. A value below 1.00 signifies that a district was below the state average for that variable; a value above 1.00 signifies that a district was above the state average. With these adjustments, the regression model took the following form:

total funding index = b0 + b1(income index) + e    (III.1)

Because both variables are measured relative to their respective state averages, the regression coefficient (b1) represents the percent difference, from the state average, in total funding relative to a percent difference, from the state average, in district income. This is precisely the elasticity we wanted to estimate and use as our fiscal neutrality score. A positive coefficient implies that total funding per weighted pupil is higher in wealthy districts, and a negative coefficient, the opposite. A coefficient that is not statistically different from 0 implies that fiscal neutrality has been achieved because no systematic differences exist in per pupil funding between wealthy and poor districts.

We used a district’s total funding per weighted pupil as the dependent variable. This variable included state and local funding for all purposes, including maintenance and operations, transportation, and capital expenditures and debt service. We divided the district’s total funding by its fall membership to put the variable in per pupil form. We used district income per weighted pupil as the independent variable, our measure of a district’s ability to raise revenue for education. Because we could not develop income per pupil data from the Common Core of Data (CCD), we used district-level per capita income from the 1990 census to construct the variable. We multiplied per capita income in a district by district population, resulting in the total income in the district. We then divided this amount by the total number of students in the district, resulting in income per pupil. Most school finance studies measure a district’s ability to raise revenue for education as district wealth, defined as property value per pupil.
However, we chose to use district income, defined as resident income per pupil, because we could not construct a property-value-per-pupil measure at the district level from the national databases that were available. Furthermore, beyond the field of school finance, income—as opposed to wealth—is the most commonly accepted measure of the ability to raise revenue. A good income measure of a district’s ability to raise revenue for education should be as comprehensive as possible. For example, the Department of the Treasury defines and compiles the total taxable resources (TTR) for each state. TTR takes into account all income either received by state residents or produced in a state. Either income measure, by itself, is incomplete. Income received by state residents does not include business income earned by nonresidents (undistributed corporate profits, for example). Alternatively, income produced does not include income earned by residents from out-of-state sources (residents who work out of state, for example). Consequently, TTR includes both income received and income produced to gauge a state’s total taxable resources.

Unfortunately, a comprehensive income measure such as the TTR is not available at the school district level. Our income measure is money income reported in the 1990 census. Its major weakness is that it does not include commercial or nonresident income that local school districts may be able to tax. It may therefore understate the ability of districts with high concentrations of this type of income to raise revenues for education. However, our measure does include the largest income category—resident income—represented in TTR. Although we would expect some differences in the results of our analyses if all income from commercial and industrial property had been included in the income variable, the general trends from our analyses would still have held true.

Finally, the regression model in equation III.1 was estimated by weighting each observation for membership size to better reflect the distribution of state funding to students rather than to districts; thus, school districts with larger enrollments had a greater effect in determining the estimated coefficients of the model.

In most states, total funding per weighted pupil increased as district income increased (the elasticity was positive). On average, wealthy districts had about 24 percent more total funding per weighted pupil than poor districts. In 37 states, the income elasticity of total funding per weighted pupil was positive. This means that as districts’ income increased, the level of total funding increased. However, the elasticity varied among the states, from a high of .469 in Maryland to a low of .055 in Washington. In three states—Alaska, Nevada, and Oklahoma—the elasticity was negative; that is, total funding decreased as district income increased. Elasticities for these three states ranged from –.556 in Nevada to –.053 in Oklahoma. The elasticity was not statistically different from 0 in the remaining nine states. Table III.1 shows the elasticity of total funding to district income and the R-squared for each state.

In most states, the amount of total funding (state and local funding combined) per weighted pupil available to wealthy districts exceeded such funding available to poor districts. However, states varied widely in the degree to which funding available to wealthy districts exceeded that of poor districts.
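The income-per-pupil construction described above is simple arithmetic; a sketch with invented census figures:

```python
# Income per pupil from census inputs, as described in this appendix:
# per capita income x district population / district enrollment.

per_capita_income = 18_500   # hypothetical district figures
population = 24_000
students = 4_800

income_per_pupil = per_capita_income * population / students
print(f"income per pupil: ${income_per_pupil:,.0f}")   # $92,500
```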
Table III.2 summarizes the gaps in total funding per weighted pupil between wealthy and poor districts. Tables III.3 and III.4 show the state averages for total funding per weighted pupil and income per weighted pupil as well as the average index numbers of these two variables for each of the five income groups of districts in a state. (In these tables, Nevada was divided into only four groups because of the distribution of the student population; the wealthiest group is group 4.)

We identified state share, state targeting, and relative local tax effort as the three key factors affecting the size of school funding gaps between poor and wealthy districts using the following rationale. First, we set aside the effects of state share and state targeting by assuming that states do not fund schools and that funding per pupil depends entirely on the revenue from local tax bases. Under this assumption, the funding gap occurs because wealthy districts can generate more local funding than poor districts when the tax effort for all districts is equal. However, the gap in funding between wealthy and poor districts would grow smaller as poor districts increased their local tax effort relative to wealthy districts. Therefore, in the absence of any state funding for education, the funding gap between poor and wealthy districts would be completely determined by the relative local tax effort of poor and wealthy districts. A state can help offset the funding gap by providing a portion of the total funding and targeting more state funds to poor districts. Consequently, the size of the funding gap between wealthy and poor districts should depend on both state equalization policies (state share and state targeting) and the relative local tax effort of poor and wealthy districts.

To measure a state’s relative local tax effort, we estimated the income elasticity of local tax effort. For each state, this elasticity measures the percent change in local tax effort associated with a 1-percent increase in district income per weighted pupil. Measured this way, the greater the elasticity, the greater the tax effort in wealthy districts as compared with poor districts. This elasticity is represented by the regression coefficient (b1) in the following equation:

local tax effort index = b0 + b1(income index) + e

where
local tax effort index = the ratio of a district’s local funding to its income, expressed as a percent of the average tax effort of all districts;
b1 = a state’s elasticity of local tax effort relative to income per weighted pupil; and
e = an error term that reflects the variation in the local tax effort that cannot be accounted for by the other variables in the model.
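A sketch of the tax-effort elasticity regression just defined, run on synthetic districts in which poorer districts try somewhat harder (statsmodels again stands in for the study's unspecified estimation routine):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
income = rng.lognormal(11, 0.35, 120)     # income per weighted pupil
enroll = rng.integers(100, 15_000, 120)

# Synthetic behavior: tax effort falls mildly as relative income rises.
effort = 0.035 * (income / income.mean()) ** -0.2
local = effort * income                   # local funding per weighted pupil

def windex(x):
    return x / np.average(x, weights=enroll)

y = windex(local / income)                # local tax effort index
X = sm.add_constant(windex(income))       # income index

b1 = sm.WLS(y, X, weights=enroll).fit().params[1]
print(f"elasticity of local tax effort: {b1:.3f}")   # negative in this setup
```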
To estimate the extent to which the three factors—elasticity of local tax effort, state share, and state targeting (see table III.6)—accounted for the variation in the funding gap between wealthy and poor districts, we constructed a regression model that used these three factors to explain cross-state differences in fiscal neutrality scores:

fiscal neutrality score = b0 + b1(state funding percentage) + b2(state targeting effort) + b3(elasticity of local tax effort) + e

where
fiscal neutrality score = a state’s elasticity of total funding per weighted pupil relative to income per weighted pupil;
state funding percentage = state funding as a percentage of total (state and local) funding;
state targeting effort = a state’s elasticity of state funding per weighted pupil relative to income per weighted pupil;
elasticity of local tax effort = a state’s elasticity of local tax effort relative to income per weighted pupil; and
e = an error term that reflects the variation in funding gaps that cannot be accounted for by the other variables in the model.

The results of this analysis showed that the three factors accounted for about 61 percent of the variation in the income-related funding gaps. Although increases in both state targeting and state share led to decreases in states’ fiscal neutrality scores, state share had a relatively greater impact on reducing income-related funding gaps than did states’ targeting efforts. Increases in the elasticity of local tax effort were associated with increases in the funding gap, meaning that as the wealthy districts’ tax effort increased relative to the poor districts’ tax effort, the income-related funding gap also increased. Of the three factors in this equation, the elasticity of local tax effort accounted for most of the variation in the fiscal neutrality scores (see table III.5). Table III.6 shows the state data used in the regression analysis.

Another way to illustrate that state equalization policies (state share and state targeting) reduced but did not eliminate the funding gap between wealthy and poor districts is shown in table III.7. In most cases, the addition of state funding to local funding caused total funding to be less sensitive to district income than local funding. This is illustrated by the fact that states’ income elasticities of total funding are usually less than those of local funding. We compared the local tax efforts of poor and wealthy districts in table III.8. In 35 states, poor districts made a higher tax effort than wealthy districts.

In this study, we developed a new equity measure to assess a state’s equalization policies (state share and state targeting) that excludes the effects of the local tax effort. To accomplish this, we viewed each state as if it were distributing state funds according to a foundation program in which the state ensures a foundation, or minimum, amount of funding per pupil for a minimum local tax effort. Using a foundation formula and assuming all districts made an equal minimum tax effort, we determined each state’s implicit foundation level given the state’s equalization policies in school year 1991-92.
This implicit foundation level is an estimate of the minimum amount of total funds (including both state and local funds) that districts could spend per student given the state’s equalization policies, provided all districts made an equal tax effort. The implicit foundation level identifies a funding level per pupil at which an equal local tax effort would produce equal funding per pupil among all districts in a state. This appendix describes how foundation formulas work and how we calculated three important summary measures for each state: targeting effort, implicit foundation level, and equalization effort.

As mentioned, to calculate these three summary measures, we assumed states behaved as if they used a foundation formula to distribute state funds to districts. As will be shown in this appendix, foundation equalization policy can result in states targeting more funds to districts with lower tax bases. Because nearly all states do target more funds to districts with low tax bases, it is reasonable to evaluate school finance policies as if they followed an implicit foundation equalization policy. To model the state targeting needed to enable districts to spend the implicit foundation amount on each student with a minimum tax effort, we used a derivation of the following foundation formula:

g = e* – t*v    (IV.1)

where
g = state funding per pupil in a school district;
e* = the implicit foundation level (including both state and local funds) that results when all districts make an equal minimum tax effort given the state’s equalization policies;
t* = the minimum tax effort, a ratio of a district’s local revenue to the district’s tax base value; and
v = the tax base per pupil in a school district (in our study, we used income per pupil).

One implication of the above equation is that if a state chose not to target additional funding to poor districts and instead provided the same funding per pupil to all students with no minimum required local tax effort (t* = 0), then the implicit foundation level for the state (e*) would equal the average state funding per pupil. That is, each district’s state funding per pupil (g) would equal the average state funding per pupil (ḡ). Another implication of the equation is that if states require a minimum tax effort (t*) greater than 0, states will have to target more funding to poor districts than to wealthy districts to achieve the same implicit foundation level (e*) for all districts. The implicit foundation level in this instance would be greater than the average state funding per pupil (ḡ), which is what results when, without a required local tax effort, no extra state funding is targeted to poor districts.

From our analysis of school year 1991-92 school finance data, we know that states do, in fact, vary in the extent to which they target additional funding to poor districts. Consequently, our purpose was to estimate the implicit foundation level that was possible in each state given the degree to which a state targets more funds to poor districts. We have divided the explanation into two parts. First, we explain how state funding would have to be targeted to ensure that all students received the state’s average total funding per pupil, provided that all local districts made an average tax effort. Second, we modify our explanation to allow for state targeting that results in an implicit foundation level below the state average, with districts making a minimum local tax effort.
On the basis of equations developed in this second part, we then describe how we estimated state targeting efforts, implicit foundation levels, and equalization efforts.

Given the total amount of funding for education in a state, the maximum foundation level possible in a state is the state’s average total funding per pupil. This means that, in principle, if all districts were to make the average tax effort to finance their local school programs, the state could target its funds to ensure that all districts could fund the average total funding per pupil. To demonstrate this, we began with an equation in which the implicit foundation level equals the state’s average total funding per pupil, and then we modified this equation to show how state funds would have to be distributed:

g = e – tv    (IV.2)

where
g = state funding per pupil in a school district;
e = the state’s average total funding per pupil, which is also the implicit foundation level in the state;
t = the average tax effort of local school districts; and
v = the tax base per pupil in a school district.

The local share of total funding per pupil, by definition, is local funding expressed as a percent of total funding. This is expressed by the following equation:

a = tv̄ / e    (IV.3)

where
a = the local share of the total funding for education in the state; and
v̄ = the average tax base per pupil in the state.

Rearranging terms in equation IV.3, we found that the equation for the average tax effort of local districts is t = ae/v̄. Substituting this equation for t in equation IV.2 and rearranging terms results in the following equation:

g = e(1 – a(v/v̄))    (IV.4)

Equation IV.4 represents how state funding would have to be distributed if all school districts were to finance the state average funding level, provided that districts made an average tax effort to finance their local schools.

We chose to measure state targeting by the income elasticity of state funding, where district income represents the tax base per pupil. The income elasticity is the percent difference in state funding that results from a 1-percent difference in district income. We can use the relationship in equation IV.4 to measure this elasticity by dividing both sides of the equation by the average state funding, ḡ = e(1 – a(v̄/v̄)) = (1 – a)e. This yields the following equation:

g/ḡ = 1/(1 – a) – (a/(1 – a))(v/v̄)    (IV.5)

where
ḡ = the average state funding per pupil, (1 – a)e.

We note that a school district’s relative state funding per pupil (g/ḡ) depends on (1) the relative size of its tax base per pupil (measured as v/v̄) and (2) the share of education funding financed at the local level (a) and, by implication, the share of education funding financed with state funds (1 – a). The slope parameter of equation IV.5, –a/(1 – a), can be interpreted as the income elasticity of state funding and represents the state’s targeting effort needed to achieve the maximum foundation level (providing all districts the capacity to fund the state average funding level with an average tax effort). The relationship also implies that the greater the local share of total funding, and therefore the smaller the state share, the greater the state’s targeting effort must be if it is to achieve the maximum foundation level for all students. Other important implications derive from this relationship: a linear relationship must exist between a school district’s relative state funding per pupil and the relative tax base per pupil; the intercept is the inverse of the state funding percentage, 1/(1 – a); and the slope and intercept will always sum to 1, that is, (1/(1 – a)) + (–a/(1 – a)) = 1.
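A quick numeric check of equation IV.4 on synthetic districts: distributing state aid as g = e(1 – a(v/v̄)) lets every district reach the state average e with the average tax effort t = ae/v̄.

```python
import numpy as np

rng = np.random.default_rng(3)
v = rng.lognormal(11, 0.4, 10)     # tax base (income) per pupil
e = 6_000                          # state average total funding per pupil
a = 0.5                            # local share of total funding

vbar = v.mean()                    # equal-enrollment districts for simplicity
t = a * e / vbar                   # average tax effort
g = e * (1 - a * v / vbar)         # state aid under maximum targeting (IV.4)
total = g + t * v                  # aid plus local revenue

assert np.allclose(total, e)       # every district hits the state average
print("total funding per pupil:", np.round(total, 2))
# Note: g can go negative for the wealthiest districts, which corresponds
# to the recapture provisions discussed earlier in this report.
```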
Although the state average represents the maximum foundation level possible in a state if all districts were to make an average tax effort, most states’ implicit foundation levels are likely less than the maximum. In this section, we develop the state targeting implications that produce an implicit foundation level that is less than the maximum. We assume that all districts make the same minimum tax effort and that the state still funds the same share of total education funding. If the implicit foundation level is less than the state average, it is because the state targets its funds to low tax base districts to a lesser degree than is required to achieve the maximum foundation level.

To model this condition, we introduced a new term—the equalizing factor (β)—into equation IV.2. The value of the equalizing factor ranges from 0 to 1. When the equalizing factor equals 1, the state’s targeting effort is at its maximum level. When the equalizing factor equals 0, the state is not targeting funds to poor districts, and every district receives the same state funding per pupil. In this instance, the implicit foundation level is simply the average state funding per pupil. An equalizing factor between 0 and 1 means the state’s effort to target funds to poor districts is less than the maximum. Introducing just the equalizing factor to the equation increases the size of state funding to each district. However, since the total amount of state funding has not changed, we had to introduce a scalar (γ) to ensure that the sum of the state funding is still the same percentage of total funding. The result of introducing these two new variables is shown in equation IV.6:

g = γ(e – βtv)    (IV.6)

where
β = the equalizing factor, that is, the fraction of the maximum targeting effort that the state undertakes; and
γ = a scalar that ensures that the total sum of state funding equals the total amount of state funds available for distribution.

The next few equations show that the scalar (γ) depends on the state share of education funding (1 – a) and the equalizing factor (β). As stated earlier, the total amount of state funding equals the sum of all the districts’ state funding. By multiplying both sides of equation IV.2 by the total number of pupils in a district (P) and summing over districts, we created an equation for the total amount of state funding (G):

G = Σ P(e – tv)    (IV.7)

where
G = the total sum of state funding available for distribution; and
P = the number of pupils in a district.

Because the total amount of state funding (G) available has not changed, it must be true that the sum of total state funding under maximum targeting efforts is the same as when targeting efforts are less than the maximum. This is represented in the following equation:

Σ P(e – tv) = Σ γP(e – βtv)    (IV.8)

Solving for the scalar (γ) yields equation IV.9:

γ = Σ P(e – tv) / Σ P(e – βtv)    (IV.9)

By definition, the sum of (Pe) equals total funding, and the sum of (Ptv) equals the total amount of local funding from all school districts. Dividing both numerator and denominator by total funding yields the following equation for the scalar (γ):

γ = (1 – a) / (1 – βa)    (IV.10)

When the state’s targeting is at its maximum level, the equalizing factor (β) equals 1, and the scalar (γ) equals 1. If the state were to provide flat funding per pupil to all districts, no targeting to poor districts would occur, the equalizing factor (β) would equal 0, and the scalar (γ) would equal (1 – a), the state’s share of total funding.
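Equation IV.10 can be checked numerically: computing the scalar from the summations in equation IV.9 reproduces (1 – a)/(1 – βa). The districts below are synthetic; the average tax base is pupil-weighted so that the identity holds exactly.

```python
import numpy as np

rng = np.random.default_rng(4)
P = rng.integers(100, 10_000, 10)    # pupils per district
v = rng.lognormal(11, 0.4, 10)       # tax base per pupil
e, a, beta = 6_000, 0.5, 0.6         # hypothetical state parameters

vbar = np.average(v, weights=P)      # pupil-weighted average tax base
t = a * e / vbar                     # average tax effort

gamma_from_sums = (P * (e - t * v)).sum() / (P * (e - beta * t * v)).sum()
gamma_formula = (1 - a) / (1 - beta * a)
print(round(gamma_from_sums, 6), "==", round(gamma_formula, 6))
```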
As discussed earlier, we used the slope of equation IV.5 to determine how much the state would have to target state funding to low tax base districts to achieve an implicit foundation level equal to the state average. Revising equation IV.6 produced a similar equation that shows how much state funding would have to be targeted to low tax base districts to achieve an implicit foundation level below the state average. We modified equation IV.6 by substituting (1 – a)/(1 – βa) for the scalar (γ) and substituting ae/v̄ for the average tax effort (t). Making these substitutions in equation IV.6 and rearranging terms yielded the following equation, analogous to equation IV.5:

g/ḡ = 1/(1 – βa) – (βa/(1 – βa))(v/v̄)    (IV.11)

This equation is the basis for running regressions, using actual district data for state funding per pupil (g) and the tax base per pupil (v). The slope, –βa/(1 – βa), represents the state’s targeting effort. When estimating this equation, the slope and the intercept, 1/(1 – βa), must be constrained so that they sum to 1. After obtaining the regression coefficient for the tax base per pupil, we can solve for the equalizing factor (β) because the local share of funding (a) is known. When the state’s implicit foundation level is less than the state average, the state’s equalizing factor (β) is less than 1, and the state’s targeting effort, βa/(1 – βa), is less than its maximum value, a/(1 – a).

The term representing the implicit foundation level in equation IV.6 equals the scalar (γ) times the state’s average total funding per pupil (e), the maximum foundation level. Substituting the expression in equation IV.10 for the scalar (γ) in equation IV.6, we expressed the implicit foundation level in terms of the state’s average total funding per pupil, the local share of school funding, and the equalizing factor as follows:

e* = ((1 – a)/(1 – βa))e    (IV.12)

Using equation IV.12 and knowing the local funding percentage (a), the equalizing factor (β), and the state average funding level (e), we solved for the state’s implicit foundation level. A state’s equalization effort is the ratio of the state’s implicit foundation level to the maximum, or average, funding level. By rearranging terms in equation IV.12, we showed that a state’s equalization effort, the ratio of the implicit foundation level (e*) to the average funding level (e), equals the scalar (γ), or (1 – a)/(1 – βa). Therefore, a state’s equalization effort reflects both the state’s share of education funding and the state’s targeting effort. Appendix V describes how we used these equations to estimate each state’s targeting effort, implicit foundation level, and equalization effort.

This appendix describes the statistical models we used to estimate each state’s targeting effort, implicit foundation level, and equalization effort. It also presents the model results and the index data for some of the model variables. In addition, it explains how the implicit foundation level for each state can be adjusted to facilitate cross-state comparisons. Finally, it describes how states’ estimated equalization efforts and relative local tax efforts can explain the variation in state fiscal neutrality scores. To estimate each state’s targeting effort, we used a regression model based on equation IV.11. The slope coefficient of that model, –βa/(1 – βa), can be interpreted as the percent difference in state funding per pupil associated with a 1-percent difference in district income from the state average per pupil income. This, by definition, is the elasticity of state per pupil funding relative to a district’s per pupil income, evaluated at the mean of these variables.
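Before turning to the estimation details, a short sketch pulls appendix IV's formulas together: recover the equalizing factor β from an estimated targeting slope, then apply equations IV.10 and IV.12. The numbers are illustrative only.

```python
# From an estimated targeting slope to the report's summary measures.
a = 0.6        # local share of total funding (hypothetical)
slope = -0.45  # estimated income coefficient from equation IV.11
e = 5_800      # state average total funding per weighted pupil

# slope = -beta*a / (1 - beta*a)  =>  beta*a = -slope / (1 - slope)
beta = -slope / (a * (1 - slope))
gamma = (1 - a) / (1 - beta * a)            # equation IV.10
e_star = gamma * e                          # equation IV.12

print(f"beta = {beta:.3f}")
print(f"implicit foundation level = ${e_star:,.0f}")
print(f"equalization effort = {e_star / e:.0%}")
```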
In the targeting model based on equation IV.11,
β = the equalizing factor, that is, the fraction of the maximum targeting effort that the state undertakes,
v = the tax base per pupil in a school district (in our study, we used income per pupil), and
v̄ = the average per pupil tax base in the state.

In the regression, both the dependent and independent variables were adjusted for differences in geographic cost within the state by applying a district-level teacher cost index to the dollar figures (see app. II). The dependent variable was a district's state funding per pupil, and the key independent variable was a district's income per pupil. Our analyses included four other independent variables that controlled for student-need factors that contribute to the cost of education. The first three of these variables relate to the presence of high-cost student groups in a district, and the fourth relates to cost differences due to economies of scale. The four variables are the percent of district students who are poor (based on the percentage of children who lived in households below the poverty level in 1989); the percent of district students who are disabled (students designated as special education students under the Individuals With Disabilities Education Act, part B, who have an Individualized Education Plan); the percent of district students who are high school students (grades 9 to 12); and the square of total district enrollment (membership) on October 1, 1991.

We included these control variables in our model rather than use the student need index developed in appendix II because we wanted to account for actual state targeting policies to the extent possible rather than use a uniform measure of student need that may not reflect actual state policy. All variables in the analysis were put into index form. Including all four control variables yielded the following model of state targeting policies:

gI = b0 + b1(vI) + b2(PovI) + b3(SNI) + b4(HSI) + b5(MEMSQI) + ε  (V.2)

where
gI = a district's state funding per pupil, cost-adjusted and expressed as a percent of the state average
vI = a district's income per pupil, cost-adjusted and expressed as a percent of the state average
c = a district's teacher cost index adjusted for statewide differences
MEMSQI = a district's student membership squared (as a percent of the state average)
PovI = the percent of district students below the poverty level (as a percent of the state average)
SNI = the percent of district students with an Individualized Education Plan (a measure of pupils with special education needs), also measured as a percent of the state average
HSI = the percent of district students who are high school students (as a percent of the state average)
ε = an error term measuring all other factors affecting the distribution of state funding.

Each of the regression coefficients in the model depends on the equalizing factor and the local share of education funding (see app. IV). An additional coefficient, unique to each variable, was added so that the regression coefficients sum to 1.0, as required by the equalization model. The membership squared variable, in effect, serves as a control for the membership size of the district. The model in equation V.2 was estimated by weighting each observation for membership size to allow school districts with larger enrollments to have a greater effect on determining the coefficients of the equation. This prevents one or a few small school districts from unduly influencing the estimated coefficients. The results are then more representative of the effect that state funding targeting policies had on students in the state.
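A minimal sketch of how such a model might be estimated follows; it is our illustration, and the synthetic data, variable names, and use of statsmodels are our choices, since the report does not specify its estimation software. Because all variables are indexes centered on 1 and the coefficients must sum to 1, the intercept can be eliminated by substitution (b0 = 1 − Σbk), turning the model into a no-intercept weighted regression of (gI − 1) on each (xk − 1).

```python
import numpy as np
import statsmodels.api as sm

# Synthetic district data standing in for one state's CCD extract.
rng = np.random.default_rng(1)
n = 120
members = rng.integers(100, 8_000, size=n).astype(float)

def to_index(x, w):
    """Express a variable as a ratio to its membership-weighted state average."""
    return x / np.average(x, weights=w)

inc = to_index(rng.uniform(60_000, 240_000, n), members)   # vI: income index
pov = to_index(rng.uniform(2, 35, n), members)             # PovI
sni = to_index(rng.uniform(5, 18, n), members)             # SNI
hsi = to_index(rng.uniform(20, 35, n), members)            # HSI
msq = to_index(members**2, members)                        # MEMSQI

# Made-up "true" policy; note the coefficients sum to 1.0 as required.
g_index = (1.30 - 0.40 * inc + 0.06 * pov + 0.03 * sni + 0.01 * hsi
           + 0.00 * msq + rng.normal(0, 0.05, n))

# Sum-to-1 constraint: b0 = 1 - sum(b_k), so regress (y - 1) on (x_k - 1)
# with no intercept, weighting each district by its membership.
y = g_index - 1.0
X = np.column_stack([inc, pov, sni, hsi, msq]) - 1.0

fit = sm.WLS(y, X, weights=members).fit()
print("targeting effort (income coefficient):", round(fit.params[0], 3))
```

Weighting by membership mirrors the report's rationale: larger districts carry more students and should carry more weight in the estimated coefficients.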
Because we were estimating the extent to which each state's funding targeting policy was consistent with providing an implicit foundation level with a minimum tax effort, we imposed the restriction that the three student-need variables have non-negative coefficients. We did not specify the direction of the coefficient for the membership squared variable because we had no expectation of how a state's targeting policy might reflect economies or diseconomies of size. Because we wanted to determine states' actual targeting efforts with respect to district income, we did not restrict the coefficient for the income per pupil variable, allowing it to take any sign. We reported state targeting efforts using the income per pupil coefficient obtained from this estimation.

Table V.1 shows the targeting effort for state funds compared with district income per pupil, the sampling error, and the overall R squared. Negative targeting efforts represent more targeting to poor than to wealthy districts; positive targeting efforts represent more targeting to wealthy than to poor districts. A targeting effort of 0 signifies no targeting of state funds to either poor or wealthy districts. Our analysis shows that 33 states targeted more state funds to districts as district income declined. However, the degree of targeting varied widely, ranging from a high of –1.007 in Nevada to a low of –.099 in Indiana. Fourteen states did not target state funds on the basis of district income—the targeting effort was not statistically different from 0. Two states—Louisiana and North Dakota—provided more state funding to districts as district income increased.

The degree to which states targeted state funds on the basis of differences in district income and student need also varied widely. In only 19 states did district income and student need account for more than 50 percent of the variation in state funding per pupil, as measured by the R squared results. In 3 of the 19 states—Kentucky, Maryland, and Virginia—more than 80 percent of the variation in state funding was explained. In the remaining 30 states, less than half of the variation in state funding per pupil was due to differences in district income and student need.

Tables V.2 and V.3 provide the average income per pupil and average state funding per pupil, as well as the average index numbers of these two variables, for groups of districts of increasing district income. Tables V.4 to V.7 provide the average index numbers for the four control variables associated with student poverty, disabled students, high school students, and district size for groups of districts of increasing district income.

[Tables V.1 through V.5 (targeting efforts, sampling errors, and overall R squared; average income per pupil and index numbers; average state funding per pupil; average poverty rates; and average disabled rates, by district income group) are omitted here. Note: Nevada was divided into only four groups because of the distribution of the student population; the wealthiest group is group 4.]
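The coefficient restrictions described above (non-negative need coefficients and, for the foundation-level calculation discussed below, a non-positive income coefficient) cannot be imposed with ordinary weighted least squares. A bounded least squares solver is one way to do it; the sketch below is our illustration under the same made-up data conventions as the previous sketch, not GAO's method.

```python
import numpy as np
from scipy.optimize import lsq_linear

# Columns: income, poverty, special needs, high school, membership^2
# (index variables minus 1, as in the unconstrained sketch above).
rng = np.random.default_rng(2)
n = 120
w = rng.integers(100, 8_000, size=n).astype(float)
X = rng.normal(0.0, 0.2, size=(n, 5))
y = X @ np.array([-0.30, 0.05, 0.04, 0.02, 0.0]) + rng.normal(0, 0.05, n)

# Multiply rows by sqrt(membership) so the objective matches WLS.
sw = np.sqrt(w)[:, None]
lower = [-np.inf, 0.0, 0.0, 0.0, -np.inf]   # need coefficients >= 0
upper = [0.0, np.inf, np.inf, np.inf, np.inf]  # income coefficient <= 0

res = lsq_linear(sw * X, sw[:, 0] * y, bounds=(lower, upper))
print("constrained coefficients:", np.round(res.x, 3))
```

The membership squared coefficient is left unbounded in both directions, matching the report's agnosticism about economies or diseconomies of size.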
[Tables V.6 and V.7 (average high school student and membership squared index numbers, by district income group) are omitted here; the same Nevada note applies.]

In appendix IV we demonstrated that to calculate the implicit foundation level we must know the state's targeting effort, the local share of total funding, the state's average total funding per weighted pupil, and the equalizing factor. Because the equalization theory underlying the implicit foundation level implies that state funding is targeted to poor districts, when we determined the targeting effort for calculating the implicit foundation level, we constrained the coefficient of the tax base variable to be less than or equal to 0. Then, having calculated the state's targeting effort (that is, the coefficient of the tax base variable, −αβ/(1 − αβ)) and knowing the local share of education funding (α) and the average total funding per weighted pupil (e), we can solve for the equalizing factor (β). Finally, knowing the equalizing factor, we can calculate the state's implicit foundation level using equation IV.12 from appendix IV (reproduced here as equation V.3):

e* = e(1 − α)/(1 − αβ)  (V.3)

where e = the state's average total funding per weighted pupil. The results for each state are reported in table V.8.

The implicit foundation level available to all students in a state depends upon the state's average total funding per weighted pupil, targeting effort, and share of total funding. Two states with the same average total funding per weighted pupil can have very different implicit foundation levels depending on their equalization policies. For example, Alaska and Connecticut had about the same average funding level. However, Alaska's state share was about twice Connecticut's. Consequently, Alaska's implicit foundation level ($6,137) was much higher than Connecticut's ($4,556), even though Connecticut's targeting effort was greater than Alaska's.

Once we know the implicit foundation level, we can calculate the state's equalization effort, a measure of the implicit foundation level as a percent of the state average. Since the state average is the maximum foundation level that is possible in a state given the total funding devoted to education, the equalization effort is a measure of how close a state comes to reaching that maximum.

States' implicit foundation levels varied widely. The average implicit foundation level was $3,090 per weighted pupil in school year 1991-92, with levels ranging from $764 in New Hampshire to $6,137 in Alaska. States' equalization efforts also varied widely. Only one state—Nevada—reached the state average for each student. The equalization effort in the other 48 states was less than the state average, ranging from 87 percent (Arkansas and Kentucky) to 13 percent (New Hampshire) of the state average. In 14 states, districts could finance less than half the state average with a minimum local tax effort.
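To make the Alaska and Connecticut comparison concrete, the sketch below applies equation V.3 using the state shares and funding averages reported in the profiles. The equalizing factors are backed out from those published figures and are our approximations, not values the report states.

```python
def implicit_foundation(e, state_share, beta):
    """Equation V.3: e* = e(1 - alpha)/(1 - alpha*beta)."""
    alpha = 1.0 - state_share            # local share of total funding
    return e * (1 - alpha) / (1 - alpha * beta)

# Alaska: high state share, essentially no income targeting (beta ~ 0).
print(f"Alaska:      ${implicit_foundation(8_030, 0.76, 0.0):,.0f}")
# Connecticut: low state share, stronger targeting (beta ~ 0.49, our estimate).
print(f"Connecticut: ${implicit_foundation(8_221, 0.39, 0.49):,.0f}")
```

The outputs land near the published $6,137 and $4,556 (small differences reflect rounding in the published shares), illustrating how a large state share can outweigh a stronger targeting effort.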
Table V.8 summarizes the critical data used to determine the implicit foundation level and equalization effort for all states: the state targeting effort (−αβ/(1 − αβ)), the state's share as a percent of total funding (1 − α), the state average funding level (e), the implicit foundation level (e*), and the equalization effort (e*/e). [Table V.8 is omitted here.]

Nevada targeted more state funds to poor districts than was necessary for districts to finance the state average with all districts making the same tax effort. As a result, poor districts in Nevada were able to finance the state average funding level with a lower tax effort than wealthy districts.

In addition to targeting additional funds to poor districts, some states provided the same minimum amount of state funding to all districts, regardless of district income. Unlike funding for lower income districts, such funding for wealthy districts was not part of the state's targeting effort because it was not sensitive to district income. Consequently, we also estimated the state implicit foundation level and equalization effort assuming the goal was to have all students except the 15 percent of students in the wealthiest districts receive the implicit foundation level. Using this analysis, we found that 16 states had a net increase of 10 percentage points or more in their equalization effort, that is, in the extent to which they achieved the state average. Table V.9 provides the results of this analysis. [Table V.9, comparing the equalization effort for all students with the effort excluding the wealthiest districts, is omitted here.]

To facilitate cross-state comparisons of the implicit foundation levels, we adjusted the implicit foundation levels reported in table V.8 for interstate differences in costs and student needs. We used a teacher cost index available from the National Center for Education Statistics (NCES) to adjust funding data for national differences in cost, and we created a nationwide need index for the states in the same way we created other indexes (see app. II). To compare states, we divided a state's funding data by the product of that state's nationwide cost and need indexes. Using this method, we calculated the nationally adjusted implicit foundation level for each state (see table V.10 and fig. V.1, which ranks states' nationally adjusted implicit foundation levels in dollars of funding per weighted pupil). Table V.11 lists the original nationwide teacher cost index we obtained from NCES, an adjusted nationwide index that applies only to teacher costs, and the nationwide need index for each state.

After calculating the state equalization effort, a measure that accounts for the combined effects of state targeting and state share in state equalization policies, we used it together with relative local tax effort to explain cross-state variation in funding gaps. In equation V.4, the dependent variable was the state fiscal neutrality scores reported in table III.1; the two independent variables were the state equalization efforts reported in table V.8 and the elasticity of local tax effort reported in table III.6. The results of this analysis showed that the two factors accounted for 63 percent of the variation in the funding gaps.
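Both the national adjustment and the equation V.4 regression are short computations. The sketch below uses invented values throughout; the index names mirror the text, but none of the numbers come from the report's tables.

```python
import numpy as np
import statsmodels.api as sm

# National adjustment: divide a state's implicit foundation level by the
# product of its nationwide teacher cost index and need index.
e_star, cost_idx, need_idx = 3_090.0, 0.97, 1.04     # invented values
print(f"nationally adjusted level: ${e_star / (cost_idx * need_idx):,.0f}")

# Equation V.4-style regression: fiscal neutrality score on equalization
# effort and the elasticity of local tax effort (synthetic 49-state data).
rng = np.random.default_rng(3)
equal_effort = rng.uniform(0.13, 1.0, 49)            # spans the reported range
tax_elastic = rng.normal(0.2, 0.15, 49)
neutrality = (0.35 - 0.25 * equal_effort + 0.6 * tax_elastic
              + rng.normal(0, 0.05, 49))

X = sm.add_constant(np.column_stack([equal_effort, tax_elastic]))
print(sm.OLS(neutrality, X).fit().rsquared)          # share of variation explained
```

The R squared printed here is the analogue of the 63 percent figure in the text; with real state data the coefficient signs would indicate whether equalization effort narrows funding gaps and local tax effort widens them.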
The elasticity of local tax effort accounted for more of the variation in funding gaps than did state equalization efforts (see table V.12).

Appendixes VII through LV contain profiles for 49 states. Each profile provides the critical data resulting from our analysis of state school finance policies. In addition, each profile provides information in tabular and graphic form on (1) the actual distribution of state and local funding to regular school districts in school year 1991-92 and (2) how the funding would have been distributed if the state share of total funding had remained the same but the targeting of state funding had been changed so that districts could spend the state average of total funding on each student with an average tax effort. All funding data in the profiles were adjusted for differences in geographic cost and student need within the state.

The profiles show averages for districts within the state in five groups of increasing income per pupil, with the groups based on student population. For example, the poorest group of districts typically contains about 20 percent of a state's student population and has the lowest incomes per pupil.

In the stacked bar graphs (the first two figures in each profile), the height of the bars shows how state funding that has been adjusted for cost and need is equalized among districts. If the state fully equalized funding, all the bars would be the same height. To assess the targeting of state funds, examine the shaded area within each bar, which represents the state's share of total funding. Where state funding was targeted to poor districts, the shaded portion is highest for the poorest districts and becomes smaller as a district's per pupil income increases.

The first figure in each profile shows how total funding per weighted pupil changed as district income per pupil increased. Typically, local funding increased with increasing per pupil income, often at a faster rate than the decline in state funding. Thus, total funding typically was greatest for the wealthiest districts. The second figure in each profile shows how state and local funding would have been distributed if all districts could have spent the average total funding per weighted pupil (so that the total funding level is the same across all groups) with an average tax effort. This figure assumes that the state optimized its targeting effort without changing the state share or the total funding for education. The third figure in each profile compares the state funding in the first figure with the state funding in the second figure; it illustrates which groups of districts would have received more or less than what they needed if the state had targeted its funds so that each district could have spent the state average of total funding on each student with an average tax effort. The data used in each of the figures appear in tables in each profile. The numbers in the tables may not sum to totals due to rounding.

Data used in the profiles were based mainly on the Department of Education's Common Core of Data (CCD) for school districts for the 1991-92 school year. In some cases, we obtained data directly from state education offices, and we imputed income and cost data for a district when the data were missing from the source. Income per pupil data were adjusted for differences in cost within a state. Funding per pupil data were adjusted for differences in student need and geographic costs within a state.
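Two bits of arithmetic recur throughout the profiles: forming the five income groups, each holding roughly a fifth of the state's students, and the funding-disparity percentages. The sketch below illustrates both with invented district values; the disparity reading (the wealthiest group's funding relative to the poorest's, before and after state aid) is our interpretation of how the profile figures are computed, not a formula the report states explicitly.

```python
import numpy as np

def income_groups(income_per_pupil, pupils, n_groups=5):
    """Assign districts to groups of roughly equal student population,
    ordered from poorest (group 1) to wealthiest (group n_groups)."""
    order = np.argsort(income_per_pupil)
    cum_share = np.cumsum(pupils[order]) / pupils.sum()
    grouped = np.minimum((cum_share * n_groups).astype(int), n_groups - 1) + 1
    groups = np.empty_like(grouped)
    groups[order] = grouped
    return groups

def disparity(wealthy, poor):
    """Percent by which the wealthiest group's funding exceeds the poorest's."""
    return 100.0 * (wealthy - poor) / poor

rng = np.random.default_rng(4)
income = rng.uniform(50_000, 250_000, 30)     # district income per pupil
pupils = rng.integers(100, 10_000, 30)        # district enrollment
print(np.bincount(income_groups(income, pupils))[1:])   # districts per group

# Alabama-like magnitudes (invented): local funding alone vs. total funding.
print(f"{disparity(1_350.0, 700.0):.0f}%")    # ~93% before state aid
print(f"{disparity(3_600.0, 3_050.0):.0f}%")  # ~18% after state aid
```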
Funding data included all state and local revenue for all purposes, including maintenance and operations, transportation, and capital expenditures and debt service.

As table VII.1 shows, in school year 1991-92, the state provided about 70 percent of the total funding to Alabama's school districts. Total funding (state and local funds combined) per weighted pupil in Alabama averaged $3,277 with an implicit foundation level of $2,287 for each student, which is about 70 percent of the average and represents the state's equalization effort. (To compare this effort with those of other states, see fig. 5.) The targeting score for state funding was .000, indicating that state education funds were not targeted to poor or wealthy districts. (To compare this score with those of other states, see table V.1 in app. V.) The fiscal neutrality score was .290, indicating that total funding increased as district income increased. (To compare this score with those of other states, see fig. 1.) An Alabama education official reported that the state had changed its school finance system since school year 1991-92 to increase funding to poor districts compared with wealthy districts (see app. LVI).

To put the state's school finance system in perspective, table VII.2 presents demographic data for school year 1991-92 for five groups of districts of increasing district income. [Table data: average total funding per weighted pupil; state share of total funding (percent); targeting score (state funds); and local funding raised for every $1,000 of district income.]

Table VII.3 presents data on how state and local funding was distributed among the five groups of Alabama districts. Alabama's equalization policies reduced the funding disparity between the poor and wealthy groups from about 93 percent to about 18 percent. Figure VII.1 provides table information in graphic form (funding per weighted pupil, in dollars).

Table VII.4 provides data on the distribution of state and local funding if all districts could have spent the average total funding per weighted pupil with an average tax effort. This assumes the state optimized its targeting effort without changing the state share or the total funding for education. Under this scenario, the implicit foundation level equals the maximum possible foundation level (the state average). Figure VII.2 provides this information in graphic form. The difference between how state funding was actually distributed and how it would have been distributed if districts could have financed the average is shown in figure VII.3. This is the local funding that could have been raised assuming all districts had made the same average tax effort. The average is the maximum foundation level possible in a state.

As table VIII.1 shows, in school year 1991-92, the state provided about 76 percent of the total funding to Alaska's school districts. Total funding (state and local funds combined) per weighted pupil in Alaska averaged $8,030 with an implicit foundation level of $6,137 for each student, which is about 76 percent of the average and represents the state's equalization effort. (To compare this effort with those of other states, see fig. 5.) The targeting score for state funding was .000, indicating that state education funds were not targeted to poor or wealthy districts. (To compare this score with those of other states, see table V.1 in app. V.)
The fiscal neutrality score was –.272, indicating that total funding increased as district income decreased. (To compare this score with those of other states, see fig. 1.)

To put the state's school finance system in perspective, table VIII.2 presents demographic data for school year 1991-92 for five groups of districts of increasing district income.

Table VIII.3 presents data on how state and local funding was distributed among the five groups of Alaska districts. Alaska's equalization policies essentially eliminated the funding disparity between the poor and wealthy groups. Figure VIII.1 provides table information in graphic form.

Table VIII.4 provides data on the distribution of state and local funding if all districts could have spent the average total funding per weighted pupil with an average tax effort. This assumes the state optimized its targeting effort without changing the state share or the total funding for education. Under this scenario, the implicit foundation level equals the maximum possible foundation level (the state average). Figure VIII.2 provides this information in graphic form. The difference between how state funding was actually distributed and how it would have been distributed if districts could have financed the average is shown in figure VIII.3. This is the local funding that could have been raised assuming all districts had made the same average tax effort. The average is the maximum foundation level possible in a state.

As table IX.1 shows, in school year 1991-92, the state provided about 47 percent of the total funding to Arizona's school districts. Total funding (state and local funds combined) per weighted pupil in Arizona averaged $4,507 with an implicit foundation level of $2,598 for each student, which is about 58 percent of the average and represents the state's equalization effort. (To compare this effort with those of other states, see fig. 5.) The targeting score for state funding was –.232, indicating that state education funds were targeted to poor districts. (To compare this score with those of other states, see table V.1 in app. V.) The fiscal neutrality score was .141, indicating that total funding increased as district income increased. (To compare this score with those of other states, see fig. 1.)

To put the state's school finance system in perspective, table IX.2 presents demographic data for school year 1991-92 for five groups of districts of increasing district income.

Table IX.3 presents data on how state and local funding was distributed among the five groups of Arizona districts. Arizona's equalization policies reduced the funding disparity between the wealthy and poor groups from about 144 percent to about 32 percent. Figure IX.1 provides table information in graphic form.

Table IX.4 provides data on the distribution of state and local funding if all districts could have spent the average total funding per weighted pupil with an average tax effort.
This assumes the state optimized its targeting effort without changing the state share or the total funding for education. Under this scenario, the implicit foundation level equals the maximum possible foundation level (the state average). Figure IX.2 provides this information in graphic form. The difference between how state funding was actually distributed and how it would have been distributed if districts could have financed the average is shown in figure IX.3. This is the local funding that could have been raised assuming all districts had made the same average tax effort. The average is the maximum foundation level possible in a state.

As table X.1 shows, in school year 1991-92, the state provided about 65 percent of the total funding to Arkansas's school districts. Total funding (state and local funds combined) per weighted pupil in Arkansas averaged $3,784 with an implicit foundation level of $3,289 for each student, which is about 87 percent of the average and represents the state's equalization effort. (To compare this effort with those of other states, see fig. 5.) The targeting score for state funding was –.328, indicating that state education funds were targeted to poor districts. (To compare this score with those of other states, see table V.1 in app. V.) The fiscal neutrality score was .220, indicating that total funding increased as district income increased. (To compare this score with those of other states, see fig. 1.)

To put the state's school finance system in perspective, table X.2 presents demographic data for school year 1991-92 for five groups of districts of increasing district income.

Table X.3 presents data on how state and local funding was distributed among the five groups of Arkansas districts. Arkansas's equalization policies reduced the funding disparity between the wealthy and poor groups from about 111 percent to about 14 percent. Figure X.1 provides table information in graphic form.

Table X.4 provides data on the distribution of state and local funding if all districts could have spent the average total funding per weighted pupil with an average tax effort. This assumes the state optimized its targeting effort without changing the state share or the total funding for education. Under this scenario, the implicit foundation level equals the maximum possible foundation level (the state average). Figure X.2 provides this information in graphic form. The difference between how state funding was actually distributed and how it would have been distributed if districts could have financed the average is shown in figure X.3. This is the local funding that could have been raised assuming all districts had made the same average tax effort. The average is the maximum foundation level possible in a state.

As table XI.1 shows, in school year 1991-92, the state provided about 69 percent of the total funding to California's school districts. Total funding (state and local funds combined) per weighted pupil in California averaged $4,543 with an implicit foundation level of $3,504 for each student, which is about 77 percent of the average and represents the state's equalization effort.
(To compare this effort with those of other states, see fig. 5.) The targeting score for state funding was –.119, indicating that state education funds were targeted to poor districts. (To compare this score with those of other states, see table V.1 in app. V.) The fiscal neutrality score was .073, indicating that total funding increased as district income increased. (To compare this score with those of other states, see fig. 1.)

To put the state's school finance system in perspective, table XI.2 presents demographic data for school year 1991-92 for five groups of districts of increasing district income.

Table XI.3 presents data on how state and local funding was distributed among the five groups of California districts. California's equalization policies reduced the funding disparity between the wealthy and poor groups from about 166 percent to about 13 percent. Figure XI.1 provides table information in graphic form.

Table XI.4 provides data on the distribution of state and local funding if all districts could have spent the average total funding per weighted pupil with an average tax effort. This assumes the state optimized its targeting effort without changing the state share or the total funding for education. Under this scenario, the implicit foundation level equals the maximum possible foundation level (the state average). Figure XI.2 provides this information in graphic form. The difference between how state funding was actually distributed and how it would have been distributed if districts could have financed the average is shown in figure XI.3. This is the local funding that could have been raised assuming all districts had made the same average tax effort. The average is the maximum foundation level possible in a state.

As table XII.1 shows, in school year 1991-92, the state provided about 44 percent of the total funding to Colorado's school districts. Total funding (state and local funds combined) per weighted pupil in Colorado averaged $5,047 with an implicit foundation level of $3,847 for each student, which is about 76 percent of the average and represents the state's equalization effort. (To compare this effort with those of other states, see fig. 5.) The targeting score for state funding was –.753, indicating that state education funds were targeted to poor districts. (To compare this score with those of other states, see table V.1 in app. V.) The fiscal neutrality score was .154, indicating that total funding increased as district income increased. (To compare this score with those of other states, see fig. 1.) A Colorado education official reported that the state had changed its school finance system since school year 1991-92 to increase funding to poor districts compared with wealthy districts (see app. LVI).

To put the state's school finance system in perspective, table XII.2 presents demographic data for school year 1991-92 for five groups of districts of increasing district income.
Table XII.3 presents data on how state and local funding was distributed among the five groups of Colorado districts. Colorado's equalization policies reduced the funding disparity between the wealthy and poor groups from about 63 percent to about 8 percent. Figure XII.1 provides table information in graphic form.

Table XII.4 provides data on the distribution of state and local funding if all districts could have spent the average total funding per weighted pupil with an average tax effort. This assumes the state optimized its targeting effort without changing the state share or the total funding for education. Under this scenario, the implicit foundation level equals the maximum possible foundation level (the state average). Figure XII.2 provides this information in graphic form. The difference between how state funding was actually distributed and how it would have been distributed if districts could have financed the average is shown in figure XII.3. This is the local funding that could have been raised assuming all districts had made the same average tax effort. The average is the maximum foundation level possible in a state.

As table XIII.1 shows, in school year 1991-92, the state provided about 39 percent of the total funding to Connecticut's school districts. Total funding (state and local funds combined) per weighted pupil in Connecticut averaged $8,221 with an implicit foundation level of $4,556 for each student, which is about 55 percent of the average and represents the state's equalization effort. (To compare this effort with those of other states, see fig. 5.) The targeting score for state funding was –.430, indicating that state education funds were targeted to poor districts. (To compare this score with those of other states, see table V.1 in app. V.) The fiscal neutrality score was .241, indicating that total funding increased as district income increased. (To compare this score with those of other states, see fig. 1.) A Connecticut education official reported that the state had changed its school finance system since school year 1991-92 to increase funding to poor districts compared with wealthy districts (see app. LVI).

To put the state's school finance system in perspective, table XIII.2 presents demographic data for school year 1991-92 for five groups of districts of increasing district income.

Table XIII.3 presents data on how state and local funding was distributed among the five groups of Connecticut districts. Connecticut's equalization policies reduced the funding disparity between the wealthy and poor groups from about 234 percent to about 34 percent. Figure XIII.1 provides table information in graphic form.

Table XIII.4 provides data on the distribution of state and local funding if all districts could have spent the average total funding per weighted pupil with an average tax effort. This assumes the state optimized its targeting effort without changing the state share or the total funding for education. Under this scenario, the implicit foundation level equals the maximum possible foundation level (the state average). Figure XIII.2 provides this information in graphic form.
The difference between how state funding was actually distributed and how it would have been distributed if districts could have financed the average is shown in figure XIII.3. This is the local funding that could have been raised assuming all districts had made the same average tax effort. The average is the maximum foundation level possible in a state.

As table XIV.1 shows, in school year 1991-92, the state provided about 70 percent of the total funding to Delaware's school districts. Total funding (state and local funds combined) per weighted pupil in Delaware averaged $5,576 with an implicit foundation level of $4,190 for each student, which is about 75 percent of the average and represents the state's equalization effort. (To compare this effort with those of other states, see fig. 5.) The targeting score for state funding was –.070, indicating that state education funds were targeted to poor districts. (To compare this score with those of other states, see table V.1 in app. V.) The fiscal neutrality score was .072, indicating that total funding increased as district income increased. (To compare this score with those of other states, see fig. 1.)

To put the state's school finance system in perspective, table XIV.2 presents demographic data for school year 1991-92 for five groups of districts of increasing district income.

Table XIV.3 presents data on how state and local funding was distributed among the five groups of Delaware districts. Delaware's equalization policies reduced the funding disparity between the wealthy and poor groups from about 156 percent to about 9 percent. Figure XIV.1 provides table information in graphic form.

Table XIV.4 provides data on the distribution of state and local funding if all districts could have spent the average total funding per weighted pupil with an average tax effort. This assumes the state optimized its targeting effort without changing the state share or the total funding for education. Under this scenario, the implicit foundation level equals the maximum possible foundation level (the state average). Figure XIV.2 provides this information in graphic form. The difference between how state funding was actually distributed and how it would have been distributed if districts could have financed the average is shown in figure XIV.3. This is the local funding that could have been raised assuming all districts had made the same average tax effort. The average is the maximum foundation level possible in a state.

As table XV.1 shows, in school year 1991-92, the state provided 53 percent of the total funding to Florida's school districts. Total funding (state and local funds combined) per weighted pupil in Florida averaged $5,555 with an implicit foundation level of $4,759 for each student, which is about 86 percent of the average and represents the state's equalization effort. (To compare this effort with those of other states, see fig. 5.) The targeting score for state funding was –.615, indicating that state education funds were targeted to poor districts.
(To compare this score with those of other states, see table V.1 in app. V.) The fiscal neutrality score was .239, indicating that total funding increased as district income increased. (To compare this score with those of other states, see fig. 1.)

To put the state's school finance system in perspective, table XV.2 presents demographic data for school year 1991-92 for five groups of districts of increasing district income.

Table XV.3 presents data on how state and local funding was distributed among the five groups of Florida districts. Florida's equalization policies reduced the funding disparity between the wealthy and poor groups from about 181 percent to about 18 percent. Figure XV.1 provides table information in graphic form.

Table XV.4 provides data on the distribution of state and local funding if all districts could have spent the average total funding per weighted pupil with an average tax effort. This assumes the state optimized its targeting effort without changing the state share or the total funding for education. Under this scenario, the implicit foundation level equals the maximum possible foundation level (the state average). Figure XV.2 provides this information in graphic form. The difference between how state funding was actually distributed and how it would have been distributed if districts could have financed the average is shown in figure XV.3. This is the local funding that could have been raised assuming all districts had made the same average tax effort. The average is the maximum foundation level possible in a state.

As table XVI.1 shows, in school year 1991-92, the state provided about 55 percent of the total funding to Georgia's school districts. Total funding (state and local funds combined) per weighted pupil in Georgia averaged $4,324 with an implicit foundation level of $2,932 for each student, which is about 68 percent of the average and represents the state's equalization effort. (To compare this effort with those of other states, see fig. 5.) The targeting score for state funding was –.242, indicating that state education funds were targeted to poor districts. (To compare this score with those of other states, see table V.1 in app. V.) The fiscal neutrality score was .323, indicating that total funding increased as district income increased. (To compare this score with those of other states, see fig. 1.)

To put the state's school finance system in perspective, table XVI.2 presents demographic data for school year 1991-92 for five groups of districts of increasing district income.

Table XVI.3 presents data on how state and local funding was distributed among the five groups of Georgia districts. Georgia's equalization policies reduced the funding disparity between the wealthy and poor groups from about 189 percent to about 30 percent. Figure XVI.1 provides table information in graphic form.
Table XVI.4 provides data on the distribution of state and local funding if all districts could have spent the average total funding per weighted pupil with an average tax effort. This assumes the state optimized its targeting effort without changing the state share or the total funding for education. Under this scenario, the implicit foundation level equals the maximum possible foundation level (the state average). Figure XVI.2 provides this information in graphic form. The difference between how state funding was actually distributed and how it would have been distributed if districts could have financed the average is shown in figure XVI.3. This is the local funding that could have been raised assuming all districts had made the same average tax effort. The average is the maximum foundation level possible in a state.

As table XVII.1 shows, in school year 1991-92, the state provided about 67 percent of the total funding to Idaho's school districts. Total funding (state and local funds combined) per weighted pupil in Idaho averaged $3,504 with an implicit foundation level of $2,654 for each student, which is about 76 percent of the average and represents the state's equalization effort. (To compare this effort with those of other states, see fig. 5.) The targeting score for state funding was –.130, indicating that state education funds were targeted to poor districts. (To compare this score with those of other states, see table V.1 in app. V.) The fiscal neutrality score was .247, indicating that total funding increased as district income increased. (To compare this score with those of other states, see fig. 1.) An Idaho education official reported that the state had changed its school finance system since school year 1991-92 to increase funding to poor districts compared with wealthy districts (see app. LVI).

To put the state's school finance system in perspective, table XVII.2 presents demographic data for school year 1991-92 for five groups of districts of increasing district income.

Table XVII.3 presents data on how state and local funding was distributed among the five groups of Idaho districts. Idaho's equalization policies reduced the funding disparity between the wealthy and poor groups from about 177 percent to about 26 percent. Figure XVII.1 provides table information in graphic form.

Table XVII.4 provides data on the distribution of state and local funding if all districts could have spent the average total funding per weighted pupil with an average tax effort. This assumes the state optimized its targeting effort without changing the state share or the total funding for education. Under this scenario, the implicit foundation level equals the maximum possible foundation level (the state average). Figure XVII.2 provides this information in graphic form. The difference between how state funding was actually distributed and how it would have been distributed if districts could have financed the average is shown in figure XVII.3. This is the local funding that could have been raised assuming all districts had made the same average tax effort.
The average is the maximum foundation level possible in a state.

As table XVIII.1 shows, in school year 1991-92, the state provided about 33 percent of the total funding to Illinois' school districts. Total funding (state and local funds combined) per weighted pupil in Illinois averaged $4,970 with an implicit foundation level of $2,031 for each student, which is about 41 percent of the average and represents the state's equalization effort. (To compare this effort with those of other states, see fig. 5.) The targeting score for state funding was –.230, indicating that state education funds were targeted to poor districts. (To compare this score with those of other states, see table V.1 in app. V.) The fiscal neutrality score was .338, indicating that total funding increased as district income increased. (To compare this score with those of other states, see fig. 1.)

To put the state's school finance system in perspective, table XVIII.2 presents demographic data for school year 1991-92 for five groups of districts of increasing district income.

Table XVIII.3 presents data on how state and local funding was distributed among the five groups of Illinois districts. Illinois' equalization policies reduced the funding disparity between the wealthy and poor groups from about 215 percent to about 67 percent. Figure XVIII.1 provides table information in graphic form.

Table XVIII.4 provides data on the distribution of state and local funding if all districts could have spent the average total funding per weighted pupil with an average tax effort. This assumes the state optimized its targeting effort without changing the state share or the total funding for education. Under this scenario, the implicit foundation level equals the maximum possible foundation level (the state average). Figure XVIII.2 provides this information in graphic form. The difference between how state funding was actually distributed and how it would have been distributed if districts could have financed the average is shown in figure XVIII.3. This is the local funding that could have been raised assuming all districts had made the same average tax effort. The average is the maximum foundation level possible in a state.

As table XIX.1 shows, in school year 1991-92, the state provided about 54 percent of the total funding to Indiana's school districts. Total funding (state and local funds combined) per weighted pupil in Indiana averaged $4,993 with an implicit foundation level of $2,970 for each student, which is about 60 percent of the average and represents the state's equalization effort. (To compare this effort with those of other states, see fig. 5.) The targeting score for state funding was –.099, indicating that state education funds were targeted to poor districts. (To compare this score with those of other states, see table V.1 in app. V.) The fiscal neutrality score was .153, indicating that total funding increased as district income increased. (To compare this score with those of other states, see fig. 1.)
An Indiana education official reported that the state had changed its school finance system since school year 1991-92 to increase funding to poor districts compared with wealthy districts (see app. LVI).

To put the state's school finance system in perspective, table XIX.2 presents demographic data for school year 1991-92 for five groups of districts of increasing district income.

Table XIX.3 presents data on how state and local funding was distributed among the five groups of Indiana districts. Indiana's equalization policies reduced the funding disparity between the wealthy and poor groups from about 40 percent to about 10 percent. Figure XIX.1 provides table information in graphic form.

Table XIX.4 provides data on the distribution of state and local funding if all districts could have spent the average total funding per weighted pupil with an average tax effort. This assumes the state optimized its targeting effort without changing the state share or the total funding for education. Under this scenario, the implicit foundation level equals the maximum possible foundation level (the state average). Figure XIX.2 provides this information in graphic form. The difference between how state funding was actually distributed and how it would have been distributed if districts could have financed the average is shown in figure XIX.3. This is the local funding that could have been raised assuming all districts had made the same average tax effort. The average is the maximum foundation level possible in a state.

As table XX.1 shows, in school year 1991-92, the state provided 49 percent of the total funding to Iowa's school districts. Total funding (state and local funds combined) per weighted pupil in Iowa averaged $4,849 with an implicit foundation level of $2,622 for each student, which is about 54 percent of the average and represents the state's equalization effort. (To compare this effort with those of other states, see fig. 5.) The targeting score for state funding was –.104, indicating that state education funds were targeted to poor districts. (To compare this score with those of other states, see table V.1 in app. V.) The fiscal neutrality score was .031, indicating that total funding increased as district income increased. (To compare this score with those of other states, see fig. 1.)

To put the state's school finance system in perspective, table XX.2 presents demographic data for school year 1991-92 for five groups of districts of increasing district income.

Table XX.3 presents data on how state and local funding was distributed among the five groups of Iowa districts. Iowa's equalization policies increased the funding advantage that poor districts had over wealthy districts from about 2 percent to about 4 percent. Figure XX.1 provides table information in graphic form.
Table XX.4 provides data on the distribution of state and local funding if all districts could have spent the average total funding per weighted pupil with an average tax effort. This assumes the state optimized its targeting effort without changing the state share or the total funding for education. Under this scenario, the implicit foundation level equals the maximum possible foundation level (the state average). Figure XX.2 provides this information in graphic form. The difference between how state funding was actually distributed and how it would have been distributed if districts could have financed the average is shown in figure XX.3. This is the local funding that could have been raised assuming all districts had made the same average tax effort. The average is the maximum foundation level possible in a state.

As table XXI.1 shows, in school year 1991-92, the state provided about 44 percent of the total funding to Kansas' school districts. Total funding (state and local funds combined) per weighted pupil in Kansas averaged $4,973 with an implicit foundation level of $2,706 for each student, which is about 54 percent of the average and represents the state's equalization effort. (To compare this effort with those of other states, see fig. 5.) The targeting score for state funding was –.241, indicating that state education funds were targeted to poor districts. (To compare this score with those of other states, see table V.1 in app. V.) The fiscal neutrality score was .014, indicating that total funding increased as district income increased. (To compare this score with those of other states, see fig. 1.) A Kansas education official reported that the state had changed its school finance system since school year 1991-92 to increase funding to poor districts compared with wealthy districts (see app. LVI).

To put the state's school finance system in perspective, table XXI.2 presents demographic data for school year 1991-92 for five groups of districts of increasing district income.

Table XXI.3 presents data on how state and local funding was distributed among the five groups of Kansas districts. Kansas' equalization policies reduced the funding disparity between the wealthy and poor groups from about 68 percent to about 9 percent. Figure XXI.1 provides table information in graphic form.

Table XXI.4 provides data on the distribution of state and local funding if all districts could have spent the average total funding per weighted pupil with an average tax effort. This assumes the state optimized its targeting effort without changing the state share or the total funding for education. Under this scenario, the implicit foundation level equals the maximum possible foundation level (the state average). Figure XXI.2 provides this information in graphic form. The difference between how state funding was actually distributed and how it would have been distributed if districts could have financed the average is shown in figure XXI.3. This is the local funding that could have been raised assuming all districts had made the same average tax effort.
The average is the maximum foundation level possible in a state.

As table XXII.1 shows, in school year 1991-92, the state provided 70 percent of the total funding to Kentucky's school districts. Total funding (state and local funds combined) per weighted pupil in Kentucky averaged $3,728 with an implicit foundation level of $3,232 for each student, which is about 87 percent of the average and represents the state's equalization effort. (To compare this effort with those of other states, see fig. 5.) The targeting score for state funding was –.239, indicating that state education funds were targeted to poor districts. (To compare this score with those of other states, see table V.1 in app. V.) The fiscal neutrality score was .126, indicating that total funding increased as district income increased. (To compare this score with those of other states, see fig. 1.)

To put the state's school finance system in perspective, table XXII.2 presents demographic data for school year 1991-92 for five groups of districts of increasing district income.

Table XXII.3 presents data on how state and local funding was distributed among the five groups of Kentucky districts. Kentucky's equalization policies reduced the funding disparity between the wealthy and poor groups from about 283 percent to about 15 percent. Figure XXII.1 provides table information in graphic form.

Table XXII.4 provides data on the distribution of state and local funding if all districts could have spent the average total funding per weighted pupil with an average tax effort. This assumes the state optimized its targeting effort without changing the state share or the total funding for education. Under this scenario, the implicit foundation level equals the maximum possible foundation level (the state average). Figure XXII.2 provides this information in graphic form. The difference between how state funding was actually distributed and how it would have been distributed if districts could have financed the average is shown in figure XXII.3. This is the local funding that could have been raised assuming all districts had made the same average tax effort. The average is the maximum foundation level possible in a state.

As table XXIII.1 shows, in school year 1991-92, the state provided about 62 percent of the total funding to Louisiana's school districts. Total funding (state and local funds combined) per weighted pupil in Louisiana averaged $3,912 with an implicit foundation level of $2,433 for each student, which is about 62 percent of the average and represents the state's equalization effort. (To compare this effort with those of other states, see fig. 5.) The targeting score for state funding was .000, indicating that state education funds were not targeted to poor or wealthy districts. (To compare this score with those of other states, see table V.1 in app. V.) The fiscal neutrality score was .216, indicating that total funding increased as district income increased. (To compare this score with those of other states, see fig. 1.)
A Louisiana education official reported that the state had changed its school finance system since school year 1991-92 to increase funding to poor districts compared with wealthy districts (see app. LVI).

To put the state’s school finance system in perspective, table XXIII.2 presents demographic data for school year 1991-92 for five groups of districts of increasing district income, including average total funding per weighted pupil, state share of total funding (percent), targeting score (state funds), and local funding raised for every $1,000 of district income.

Table XXIII.3 presents data on how state and local funding was distributed among the five groups of Louisiana districts. Louisiana’s equalization policies reduced the funding disparity between the wealthy and poor groups from about 80 percent to about 21 percent. Figure XXIII.1 provides table information in graphic form.

Table XXIII.4 provides data on the distribution of state and local funding if all districts could have spent the average total funding per weighted pupil with an average tax effort. This assumes the state optimized its targeting effort without changing the state share or the total funding for education. Under this scenario, the implicit foundation level equals the maximum possible foundation level (the state average). Figure XXIII.2 provides this information in graphic form. The difference between how state funding was actually distributed and how it would have been distributed if districts could have financed the average is shown in figure XXIII.3. This is the local funding that could have been raised assuming all districts had made the same average tax effort. The average is the maximum foundation level possible in a state.

Appendix XXIV State Profile: Maine

As table XXIV.1 shows, in school year 1991-92, the state provided about 50 percent of the total funding to Maine’s school districts. Total funding (state and local funds combined) per weighted pupil in Maine averaged $5,681 with an implicit foundation level of $3,612 for each student, which is about 64 percent of the average and represents the state’s equalization effort. (To compare this effort with those of other states, see fig. 5.) The targeting score for state funding was –.287, indicating that state education funds were targeted to poor districts. (To compare this score with those of other states, see table V.1 in app. V.) The fiscal neutrality score was .176, indicating that total funding increased as district income increased. (To compare this score with those of other states, see fig. 1.)

To put the state’s school finance system in perspective, table XXIV.2 presents demographic data for school year 1991-92 for five groups of districts of increasing district income, including average total funding per weighted pupil, state share of total funding (percent), targeting score (state funds), and local funding raised for every $1,000 of district income.

Table XXIV.3 presents data on how state and local funding was distributed among the five groups of Maine districts. Maine’s equalization policies reduced the total funding disparity between the wealthy and poor groups from about 100 percent to about 17 percent. Figure XXIV.1 provides table information in graphic form.
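The disparity percentages in these profiles compare the wealthiest group of districts with the poorest. One natural reading, assumed here for illustration, is the percentage by which the wealthiest group’s funding per weighted pupil exceeds the poorest group’s; the group amounts below are hypothetical, chosen only to mirror Maine’s reported reduction from about 100 percent to about 17 percent.

```python
# Funding disparity between the wealthiest and poorest groups of
# districts, read as the percentage by which the wealthiest group's
# funding per weighted pupil exceeds the poorest group's.

def disparity(wealthy, poor):
    """Percent by which wealthy-group funding exceeds poor-group funding."""
    return 100 * (wealthy - poor) / poor

# Hypothetical amounts mirroring Maine's reduction from about 100
# percent (local funds only) to about 17 percent (after state aid).
print(round(disparity(6_000, 3_000)))  # -> 100 (percent)
print(round(disparity(5_850, 5_000)))  # -> 17 (percent)
```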
Table XXIV.4 provides data on the distribution of state and local funding if all districts could have spent the average total funding per weighted pupil with an average tax effort. This assumes the state optimized its targeting effort without changing the state share or the total funding for education. Under this scenario, the implicit foundation level equals the maximum possible foundation level (the state average). Figure XXIV.2 provides this information in graphic form. The difference between how state funding was actually distributed and how it would have been distributed if districts could have financed the average is shown in figure XXIV.3. This is the local funding that could have been raised assuming all districts had made the same average tax effort. The average is the maximum foundation level possible in a state.

Appendix XXV State Profile: Maryland

As table XXV.1 shows, in school year 1991-92, the state provided about 40 percent of the total funding to Maryland’s school districts. Total funding (state and local funds combined) per weighted pupil in Maryland averaged $6,039 with an implicit foundation level of $3,819 for each student, which is about 63 percent of the average and represents the state’s equalization effort. (To compare this effort with those of other states, see fig. 5.) The targeting score for state funding was –.566, indicating that state education funds were targeted to poor districts. (To compare this score with those of other states, see table V.1 in app. V.) The fiscal neutrality score was .469, indicating that total funding increased as district income increased. (To compare this score with those of other states, see fig. 1.)

A Maryland education official reported that the state had changed its school finance system since school year 1991-92 to increase funding to poor districts compared with wealthy districts (see app. LVI).

To put the state’s school finance system in perspective, table XXV.2 presents demographic data for school year 1991-92 for five groups of districts of increasing district income, including average total funding per weighted pupil, state share of total funding (percent), targeting score (state funds), and local funding raised for every $1,000 of district income.

Table XXV.3 presents data on how state and local funding was distributed among the five groups of Maryland districts. Maryland’s equalization policies reduced the funding disparity between the wealthy and poor groups from about 217 percent to about 65 percent. Figure XXV.1 provides table information in graphic form.

Table XXV.4 provides data on the distribution of state and local funding if all districts could have spent the average total funding per weighted pupil with an average tax effort. This assumes the state optimized its targeting effort without changing the state share or the total funding for education. Under this scenario, the implicit foundation level equals the maximum possible foundation level (the state average). Figure XXV.2 provides this information in graphic form. The difference between how state funding was actually distributed and how it would have been distributed if districts could have financed the average is shown in figure XXV.3. This is the local funding that could have been raised assuming all districts had made the same average tax effort.
The average is the maximum foundation level possible in a state.

Appendix XXVI State Profile: Massachusetts

As table XXVI.1 shows, in school year 1991-92, the state provided about 31 percent of the total funding to Massachusetts’ school districts. Total funding (state and local funds combined) per weighted pupil in Massachusetts averaged $6,264 with an implicit foundation level of $2,542 for each student, which is about 41 percent of the average and represents the state’s equalization effort. (To compare this effort with those of other states, see fig. 5.) The targeting score for state funding was –.316, indicating that state education funds were targeted to poor districts. (To compare this score with those of other states, see table V.1 in app. V.) The fiscal neutrality score was .447, indicating that total funding increased as district income increased. (To compare this score with those of other states, see fig. 1.)

A Massachusetts education official reported that the state had changed its school finance system since school year 1991-92 to increase funding to poor districts compared with wealthy districts (see app. LVI).

To put the state’s school finance system in perspective, table XXVI.2 presents demographic data for school year 1991-92 for five groups of districts of increasing district income, including average total funding per weighted pupil, state share of total funding (percent), targeting score (state funds), and local funding raised for every $1,000 of district income.

Table XXVI.3 presents data on how state and local funding was distributed among the five groups of Massachusetts districts. Massachusetts’ equalization policies reduced the funding disparity between the wealthy and poor groups from about 228 percent to about 54 percent. Figure XXVI.1 provides table information in graphic form.

Table XXVI.4 provides data on the distribution of state and local funding if all districts could have spent the average total funding per weighted pupil with an average tax effort. This assumes the state optimized its targeting effort without changing the state share or the total funding for education. Under this scenario, the implicit foundation level equals the maximum possible foundation level (the state average). Figure XXVI.2 provides this information in graphic form. The difference between how state funding was actually distributed and how it would have been distributed if districts could have financed the average is shown in figure XXVI.3. This is the local funding that could have been raised assuming all districts had made the same average tax effort. The average is the maximum foundation level possible in a state.

Appendix XXVII State Profile: Michigan

As table XXVII.1 shows, in school year 1991-92, the state provided about 33 percent of the total funding to Michigan’s school districts. Total funding (state and local funds combined) per weighted pupil in Michigan averaged $5,851 with an implicit foundation level of $2,839 for each student, which is about 49 percent of the average and represents the state’s equalization effort. (To compare this effort with those of other states, see fig. 5.) The targeting score for state funding was –.475, indicating that state education funds were targeted to poor districts. (To compare this score with those of other states, see table V.1 in app. V.)
The fiscal neutrality score was .290, indicating that total funding increased as district income increased. (To compare this score with those of other states, see fig. 1.)

A Michigan education official reported that the state had changed its school finance system since school year 1991-92 to increase funding to poor districts compared with wealthy districts (see app. LVI).

To put the state’s school finance system in perspective, table XXVII.2 presents demographic data for school year 1991-92 for five groups of districts of increasing district income, including average total funding per weighted pupil, state share of total funding (percent), targeting score (state funds), and local funding raised for every $1,000 of district income.

Table XXVII.3 presents data on how state and local funding was distributed among the five groups of Michigan districts. Michigan’s equalization policies reduced the funding disparity between the wealthy and poor groups from about 208 percent to about 36 percent. Figure XXVII.1 provides table information in graphic form.

Table XXVII.4 provides data on the distribution of state and local funding if all districts could have spent the average total funding per weighted pupil with an average tax effort. This assumes the state optimized its targeting effort without changing the state share or the total funding for education. Under this scenario, the implicit foundation level equals the maximum possible foundation level (the state average). Figure XXVII.2 provides this information in graphic form. The difference between how state funding was actually distributed and how it would have been distributed if districts could have financed the average is shown in figure XXVII.3. This is the local funding that could have been raised assuming all districts had made the same average tax effort. The average is the maximum foundation level possible in a state.

Appendix XXVIII State Profile: Minnesota

As table XXVIII.1 shows, in school year 1991-92, the state provided about 54 percent of the total funding to Minnesota’s school districts. Total funding (state and local funds combined) per weighted pupil in Minnesota averaged $5,646 with an implicit foundation level of $4,524 for each student, which is about 80 percent of the average and represents the state’s equalization effort. (To compare this effort with those of other states, see fig. 5.) The targeting score for state funding was –.499, indicating that state education funds were targeted to poor districts. (To compare this score with those of other states, see table V.1 in app. V.) The fiscal neutrality score was .113, indicating that total funding increased as district income increased. (To compare this score with those of other states, see fig. 1.)

A Minnesota education official reported that the state had changed its school finance system since school year 1991-92 to increase funding to poor districts compared with wealthy districts (see app. LVI).

To put the state’s school finance system in perspective, table XXVIII.2 presents demographic data for school year 1991-92 for five groups of districts of increasing district income, including average total funding per weighted pupil, state share of total funding (percent), targeting score (state funds), and local funding raised for every $1,000 of district income.
Table XXVIII.3 presents data on how state and local funding was distributed among the five groups of Minnesota districts. Minnesota’s equalization policies reduced the funding disparity between the wealthy and poor groups from 133 percent to about 11 percent. Figure XXVIII.1 provides table information in graphic form.

Table XXVIII.4 provides data on the distribution of state and local funding if all districts could have spent the average total funding per weighted pupil with an average tax effort. This assumes the state optimized its targeting effort without changing the state share or the total funding for education. Under this scenario, the implicit foundation level equals the maximum possible foundation level (the state average). Figure XXVIII.2 provides this information in graphic form. The difference between how state funding was actually distributed and how it would have been distributed if districts could have financed the average is shown in figure XXVIII.3. This is the local funding that could have been raised assuming all districts had made the same average tax effort. The average is the maximum foundation level possible in a state.

Appendix XXIX State Profile: Mississippi

As table XXIX.1 shows, in school year 1991-92, the state provided about 64 percent of the total funding to Mississippi’s school districts. Total funding (state and local funds combined) per weighted pupil in Mississippi averaged $2,831 with an implicit foundation level of $1,860 for each student, which is about 66 percent of the average and represents the state’s equalization effort. (To compare this effort with those of other states, see fig. 5.) The targeting score for state funding was –.020, indicating that state education funds were targeted to poor districts. (To compare this score with those of other states, see table V.1 in app. V.) The fiscal neutrality score was .007, indicating that total funding increased as district income increased. (To compare this score with those of other states, see fig. 1.)

A Mississippi education official reported that the state had changed its school finance system since school year 1991-92 to increase funding to poor districts compared with wealthy districts (see app. LVI).

To put the state’s school finance system in perspective, table XXIX.2 presents demographic data for school year 1991-92 for five groups of districts of increasing district income, including average total funding per weighted pupil, state share of total funding (percent), targeting score (state funds), and local funding raised for every $1,000 of district income.

Table XXIX.3 presents data on how state and local funding was distributed among the five groups of Mississippi districts. Mississippi’s equalization policies eliminated the funding disparity between the wealthy and poor groups, with poor districts receiving about 2 percent more total funding than wealthy districts. Figure XXIX.1 provides table information in graphic form.

Table XXIX.4 provides data on the distribution of state and local funding if all districts could have spent the average total funding per weighted pupil with an average tax effort. This assumes the state optimized its targeting effort without changing the state share or the total funding for education.
Under this scenario, the implicit foundation level equals the maximum possible foundation level (the state average). Figure XXIX.2 provides this information in graphic form. The difference between how state funding was actually distributed and how it would have been distributed if districts could have financed the average is shown in figure XXIX.3. This is the local funding that could have been raised assuming all districts had made the same average tax effort. The average is the maximum foundation level possible in a state.

Appendix XXX State Profile: Missouri

As table XXX.1 shows, in school year 1991-92, the state provided about 45 percent of the total funding to Missouri’s school districts. Total funding (state and local funds combined) per weighted pupil in Missouri averaged $3,972 with an implicit foundation level of $1,802 for each student, which is about 45 percent of the average and represents the state’s equalization effort. (To compare this effort with those of other states, see fig. 5.) The targeting score for state funding was –.017, indicating that state education funds were targeted to poor districts. (To compare this score with those of other states, see table V.1 in app. V.) The fiscal neutrality score was .362, indicating that total funding increased as district income increased. (To compare this score with those of other states, see fig. 1.)

A Missouri education official reported that the state had changed its school finance system since school year 1991-92 to increase funding to poor districts compared with wealthy districts (see app. LVI).

To put the state’s school finance system in perspective, table XXX.2 presents demographic data for school year 1991-92 for five groups of districts of increasing district income, including average total funding per weighted pupil, state share of total funding (percent), targeting score (state funds), and local funding raised for every $1,000 of district income.

Table XXX.3 presents data on how state and local funding was distributed among the five groups of Missouri districts. Missouri’s equalization policies reduced the funding disparity between the wealthy and poor groups from 181 percent to about 70 percent. Figure XXX.1 provides table information in graphic form.

Table XXX.4 provides data on the distribution of state and local funding if all districts could have spent the average total funding per weighted pupil with an average tax effort. This assumes the state optimized its targeting effort without changing the state share or the total funding for education. Under this scenario, the implicit foundation level equals the maximum possible foundation level (the state average). Figure XXX.2 provides this information in graphic form. The difference between how state funding was actually distributed and how it would have been distributed if districts could have financed the average is shown in figure XXX.3. This is the local funding that could have been raised assuming all districts had made the same average tax effort. The average is the maximum foundation level possible in a state.

Appendix XXXI State Profile: Montana

As table XXXI.1 shows, in school year 1991-92, the state provided about 44 percent of the total funding to Montana’s school districts.
Total funding (state and local funds combined) per weighted pupil in Montana averaged $4,835 with an implicit foundation level of $2,406 for each student, which is about 50 percent of the average and represents the state’s equalization effort. (To compare this effort with those of other states, see fig. 5.) The targeting score for state funding was –.126, indicating that state education funds were targeted to poor districts. (To compare this score with those of other states, see table V.1 in app. V.) The fiscal neutrality score was .393, indicating that total funding increased as district income increased. (To compare this score with those of other states, see fig. 1.)

A Montana education official reported that the state had changed its school finance system since school year 1991-92 to increase funding to poor districts compared with wealthy districts (see app. LVI).

To put the state’s school finance system in perspective, table XXXI.2 presents demographic data for school year 1991-92 for five groups of districts of increasing district income, including average total funding per weighted pupil, state share of total funding (percent), targeting score (state funds), and local funding raised for every $1,000 of district income.

Table XXXI.3 presents data on how state and local funding was distributed among the five groups of Montana districts. Although Montana provided more state funding to wealthy districts than to poor districts, Montana’s equalization policies moderated the funding disparity between the wealthy and poor groups from about 104 percent to about 73 percent. Figure XXXI.1 provides table information in graphic form.

Table XXXI.4 provides data on the distribution of state and local funding if all districts could have spent the average total funding per weighted pupil with an average tax effort. This assumes the state optimized its targeting effort without changing the state share or the total funding for education. Under this scenario, the implicit foundation level equals the maximum possible foundation level (the state average). Figure XXXI.2 provides this information in graphic form. The difference between how state funding was actually distributed and how it would have been distributed if districts could have financed the average is shown in figure XXXI.3. This is the local funding that could have been raised assuming all districts had made the same average tax effort. The average is the maximum foundation level possible in a state.

Appendix XXXII State Profile: Nebraska

As table XXXII.1 shows, in school year 1991-92, the state provided about 34 percent of the total funding to Nebraska’s school districts. Total funding (state and local funds combined) per weighted pupil in Nebraska averaged $5,148 with an implicit foundation level of $2,203 for each student, which is about 43 percent of the average and represents the state’s equalization effort. (To compare this effort with those of other states, see fig. 5.) The targeting score for state funding was –.246, indicating that state education funds were targeted to poor districts. (To compare this score with those of other states, see table V.1 in app. V.) The fiscal neutrality score was .154, indicating that total funding increased as district income increased. (To compare this score with those of other states, see fig. 1.)
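The profiles report targeting and fiscal neutrality scores without restating how they are computed. One plausible implementation, assumed here purely for illustration, is an income elasticity of funding: the slope from regressing the natural log of funding per weighted pupil on the natural log of district income, so that a negative targeting score means state aid falls as district income rises and a positive fiscal neutrality score means total funding rises with income. The report’s technical appendixes define the actual method, which this sketch does not claim to reproduce; the district data below are hypothetical.

```python
# One plausible reading of the targeting and fiscal neutrality scores:
# the elasticity of funding with respect to district income, i.e., the
# slope from regressing ln(funding per weighted pupil) on ln(income).
# This is an illustrative assumption, not GAO's documented method.
import math

def elasticity(income, funding):
    """Slope of ln(funding) on ln(income) by ordinary least squares."""
    x = [math.log(i) for i in income]
    y = [math.log(f) for f in funding]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    return sxy / sxx

# Hypothetical districts: state aid falls as income rises, so the
# targeting score comes out negative (funds targeted to poor districts).
income = [20_000, 30_000, 40_000, 50_000]
state_aid = [3_000, 2_600, 2_300, 2_100]
print(round(elasticity(income, state_aid), 3))  # -> -0.391
```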
A Nebraska education official reported that the state had changed its school finance system since school year 1991-92 to increase funding to poor districts compared with wealthy districts (see app. LVI).

To put the state’s school finance system in perspective, table XXXII.2 presents demographic data for school year 1991-92 for five groups of districts of increasing district income, including average total funding per weighted pupil, state share of total funding (percent), targeting score (state funds), and local funding raised for every $1,000 of district income.

Table XXXII.3 presents data on how state and local funding was distributed among the five groups of Nebraska districts. Nebraska’s equalization policies reduced the total funding disparity between the wealthy and poor groups from about 27 percent to about 5 percent. Figure XXXII.1 provides table information in graphic form.

Table XXXII.4 provides data on the distribution of state and local funding if all districts could have spent the average total funding per weighted pupil with an average tax effort. This assumes the state optimized its targeting effort without changing the state share or the total funding for education. Under this scenario, the implicit foundation level equals the maximum possible foundation level (the state average). Figure XXXII.2 provides this information in graphic form. The difference between how state funding was actually distributed and how it would have been distributed if districts could have financed the average is shown in figure XXXII.3. This is the local funding that could have been raised assuming all districts had made the same average tax effort. The average is the maximum foundation level possible in a state.

Appendix XXXIII State Profile: Nevada

As table XXXIII.1 shows, in school year 1991-92, the state provided about 57 percent of the total funding to Nevada’s school districts. Total funding (state and local funds combined) per weighted pupil in Nevada averaged $3,597 with the same implicit foundation level, achieving an equalization effort of 100 percent. (To compare this effort with those of other states, see fig. 5.) The targeting score for state funding was –1.007, indicating that state education funds were targeted to poor districts. (To compare this score with those of other states, see table V.1 in app. V.) The fiscal neutrality score was –.556, indicating that total funding increased as district income decreased. (To compare this score with those of other states, see fig. 1.)

To put the state’s school finance system in perspective, table XXXIII.2 presents demographic data for school year 1991-92 for four groups of districts of increasing district income, including average total funding per weighted pupil, state share of total funding (percent), targeting score (state funds), and local funding raised for every $1,000 of district income. Nevada was divided into four groups rather than five because of its student population distribution.

Table XXXIII.3 presents data on how state and local funding was distributed among the four groups of districts. Nevada’s equalization policies increased the funding that poor districts had compared with wealthy districts, resulting in wealthy districts having 31 percent less funding than poor districts. Figure XXXIII.1 provides table information in graphic form.
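Nevada illustrates the limiting case of these measures: when the implicit foundation level equals the state average, the equalization effort is 100 percent, and the wealthy-poor comparison can turn negative. The group totals below are hypothetical, chosen only to match the 31 percent figure reported above.

```python
# Nevada's limiting case: the implicit foundation level equals the
# state average, so the equalization effort is 100 percent.
avg_total = 3_597            # average total funding per weighted pupil
implicit_foundation = 3_597  # equal to the average in Nevada's case
print(100 * implicit_foundation / avg_total)  # -> 100.0 (percent)

# Hypothetical group totals consistent with wealthy districts having
# 31 percent less total funding than poor districts.
wealthy, poor = 2_760, 4_000
print(round(100 * (wealthy - poor) / poor))   # -> -31 (percent)
```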
Table XXXIII.4 provides data on the distribution of state and local funding if all districts could have spent the average total funding per weighted pupil with an average tax effort. This assumes the state optimized its targeting effort without changing the state share or the total funding for education. Under this scenario, the implicit foundation level equals the maximum possible foundation level (the state average). Figure XXXIII.2 provides this information in graphic form. The difference between how state funding was actually distributed and how it would have been distributed if districts could have financed the average is shown in figure XXXIII.3. This is the local funding that could have been raised assuming all districts had made the same average tax effort. The average is the maximum foundation level possible in a state.

Appendix XXXIV State Profile: New Hampshire

As table XXXIV.1 shows, in school year 1991-92, the state provided about 8 percent of the total funding to New Hampshire’s school districts. Total funding (state and local funds combined) per weighted pupil in New Hampshire averaged $5,850 with an implicit foundation level of $764 for each student, which is about 13 percent of the average and represents the state’s equalization effort. (To compare this effort with those of other states, see fig. 5.) The targeting score for state funding was –.571, indicating that state education funds were targeted to poor districts. (To compare this score with those of other states, see table V.1 in app. V.) The fiscal neutrality score was .238, indicating that total funding increased as district income increased. (To compare this score with those of other states, see fig. 1.)

To put the state’s school finance system in perspective, table XXXIV.2 presents demographic data for school year 1991-92 for five groups of districts of increasing district income, including average total funding per weighted pupil, state share of total funding (percent), targeting score (state funds), and local funding raised for every $1,000 of district income.

Table XXXIV.3 presents data on how state and local funding was distributed among the five groups of New Hampshire districts. New Hampshire’s equalization policies reduced the funding disparity between the wealthy and poor groups from 48 percent to about 30 percent. Figure XXXIV.1 provides table information in graphic form.

Table XXXIV.4 provides data on the distribution of state and local funding if all districts could have spent the average total funding per weighted pupil with an average tax effort. This assumes the state optimized its targeting effort without changing the state share or the total funding for education. Under this scenario, the implicit foundation level equals the maximum possible foundation level (the state average). Figure XXXIV.2 provides this information in graphic form. The difference between how state funding was actually distributed and how it would have been distributed if districts could have financed the average is shown in figure XXXIV.3. This is the local funding that could have been raised assuming all districts had made the same average tax effort. The average is the maximum foundation level possible in a state.
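The what-if scenario behind each profile’s fourth table can be pictured as gap-filling: every district levies the average tax effort, and state aid tops each district up to the state average, so the implicit foundation level equals the average. The sketch below uses hypothetical district incomes and a hypothetical average tax effort, and it omits the report’s constraint that the state share and total education funding stay unchanged.

```python
# Gap-filling logic of the table X.4 scenario: districts levy the
# average tax effort, and state aid makes up the difference so each
# district can spend the state average per weighted pupil.

avg_total = 5_000      # hypothetical state average per weighted pupil
avg_tax_effort = 0.10  # i.e., $100 raised for every $1,000 of income

incomes = [20_000, 40_000, 60_000]  # hypothetical income per weighted pupil
for income in incomes:
    local = avg_tax_effort * income        # local funds at average effort
    state_aid = max(avg_total - local, 0)  # aid fills the gap to the average
    print(f"income={income}: local={local:,.0f}, state aid={state_aid:,.0f}")
```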
Appendix XXXV State Profile: New Jersey

As table XXXV.1 shows, in school year 1991-92, the state provided about 43 percent of the total funding to New Jersey’s school districts. Total funding (state and local funds combined) per weighted pupil in New Jersey averaged $9,239 with an implicit foundation level of $4,399 for each student, which is about 48 percent of the average and represents the state’s equalization effort. (To compare this effort with those of other states, see fig. 5.) The targeting score for state funding was –.104, indicating that state education funds were targeted to poor districts. (To compare this score with those of other states, see table V.1 in app. V.) The fiscal neutrality score was .168, indicating that total funding increased as district income increased. (To compare this score with those of other states, see fig. 1.)

A New Jersey education official reported that the state had changed its school finance system since school year 1991-92 to increase funding to poor districts compared with wealthy districts (see app. LVI).

To put the state’s school finance system in perspective, table XXXV.2 presents demographic data for school year 1991-92 for five groups of districts of increasing district income, including average total funding per weighted pupil, state share of total funding (percent), targeting score (state funds), and local funding raised for every $1,000 of district income.

Table XXXV.3 presents data on how state and local funding was distributed among the five groups of New Jersey districts. New Jersey’s equalization policies reduced the funding disparity between the wealthy and poor groups from 247 percent to about 31 percent. Figure XXXV.1 provides table information in graphic form.

Table XXXV.4 provides data on the distribution of state and local funding if all districts could have spent the average total funding per weighted pupil with an average tax effort. This assumes the state optimized its targeting effort without changing the state share or the total funding for education. Under this scenario, the implicit foundation level equals the maximum possible foundation level (the state average). Figure XXXV.2 provides this information in graphic form. The difference between how state funding was actually distributed and how it would have been distributed if districts could have financed the average is shown in figure XXXV.3. This is the local funding that could have been raised assuming all districts had made the same average tax effort. The average is the maximum foundation level possible in a state.

Appendix XXXVI State Profile: New Mexico

As table XXXVI.1 shows, in school year 1991-92, the state provided 85 percent of the total funding to New Mexico’s school districts. Total funding (state and local funds combined) per weighted pupil in New Mexico averaged $3,830 with an implicit foundation level of $3,254 for each student, which is 85 percent of the average and represents the state’s equalization effort. (To compare this effort with those of other states, see fig. 5.) The targeting score for state funding was .000, indicating that state education funds were not targeted to poor or wealthy districts.
(To compare this score with those of other states, see table V.1 in app. V.) The fiscal neutrality score was .004, indicating that total funding increased as district income increased. (To compare this score with those of other states, see fig. 1.)

To put the state’s school finance system in perspective, table XXXVI.2 presents demographic data for school year 1991-92 for five groups of districts of increasing district income, including average total funding per weighted pupil, state share of total funding (percent), targeting score (state funds), and local funding raised for every $1,000 of district income.

Table XXXVI.3 presents data on how state and local funding was distributed among the five groups of New Mexico districts. New Mexico’s equalization policies reduced the funding disparity between the wealthy and poor groups from about 33 percent to about 5 percent. Figure XXXVI.1 provides table information in graphic form.

Table XXXVI.4 provides data on the distribution of state and local funding if all districts could have spent the average total funding per weighted pupil with an average tax effort. This assumes the state optimized its targeting effort without changing the state share or the total funding for education. Under this scenario, the implicit foundation level equals the maximum possible foundation level (the state average). Figure XXXVI.2 provides this information in graphic form. The difference between how state funding was actually distributed and how it would have been distributed if districts could have financed the average is shown in figure XXXVI.3. This is the local funding that could have been raised assuming all districts had made the same average tax effort. The average is the maximum foundation level possible in a state.

Appendix XXXVII State Profile: New York

As table XXXVII.1 shows, in school year 1991-92, the state provided about 43 percent of the total funding to New York’s school districts. Total funding (state and local funds combined) per weighted pupil in New York averaged $7,787 with an implicit foundation level of $5,240 for each student, which is about 67 percent of the average and represents the state’s equalization effort. (To compare this effort with those of other states, see fig. 5.) The targeting score for state funding was –.578, indicating that state education funds were targeted to poor districts. (To compare this score with those of other states, see table V.1 in app. V.) The fiscal neutrality score was .370, indicating that total funding increased as district income increased. (To compare this score with those of other states, see fig. 1.)

To put the state’s school finance system in perspective, table XXXVII.2 presents demographic data for school year 1991-92 for five groups of districts of increasing district income, including average total funding per weighted pupil, state share of total funding (percent), targeting score (state funds), and local funding raised for every $1,000 of district income.

Table XXXVII.3 presents data on how state and local funding was distributed among the five groups of New York districts. New York’s equalization policies reduced the funding disparity between the wealthy and poor groups from about 189 percent to 32 percent. Figure XXXVII.1 provides table information in graphic form.
Table XXXVII.4 provides data on the distribution of state and local funding if all districts could have spent the average total funding per weighted pupil with an average tax effort. This assumes the state optimized its targeting effort without changing the state share or the total funding for education. Under this scenario, the implicit foundation level equals the maximum possible foundation level (the state average). Figure XXXVII.2 provides this information in graphic form. The difference between how state funding was actually distributed and how it would have been distributed if districts could have financed the average is shown in figure XXXVII.3. This is the local funding that could have been raised assuming all districts had made the same average tax effort. The average is the maximum foundation level possible in a state.

Appendix XXXVIII State Profile: North Carolina

As table XXXVIII.1 shows, in school year 1991-92, the state provided about 68 percent of the total funding to North Carolina’s school districts. Total funding (state and local funds combined) per weighted pupil in North Carolina averaged $4,424 with an implicit foundation level of $3,043 for each student, which is about 69 percent of the average and represents the state’s equalization effort. (To compare this effort with those of other states, see fig. 5.) The targeting score for state funding was –.016, indicating that state education funds were targeted to poor districts. (To compare this score with those of other states, see table V.1 in app. V.) The fiscal neutrality score was .250, indicating that total funding increased as district income increased. (To compare this score with those of other states, see fig. 1.)

A North Carolina education official reported that the state had changed its school finance system since school year 1991-92 to increase funding to poor districts compared with wealthy districts (see app. LVI).

To put the state’s school finance system in perspective, table XXXVIII.2 presents demographic data for school year 1991-92 for five groups of districts of increasing district income, including average total funding per weighted pupil, state share of total funding (percent), targeting score (state funds), and local funding raised for every $1,000 of district income.

Table XXXVIII.3 presents data on how state and local funding was distributed among the five groups of North Carolina districts. North Carolina’s equalization policies reduced the funding disparity between the wealthy and poor groups from about 110 percent to about 18 percent. Figure XXXVIII.1 provides table information in graphic form.

Table XXXVIII.4 provides data on the distribution of state and local funding if all districts could have spent the average total funding per weighted pupil with an average tax effort. This assumes the state optimized its targeting effort without changing the state share or the total funding for education. Under this scenario, the implicit foundation level equals the maximum possible foundation level (the state average). Figure XXXVIII.2 provides this information in graphic form. The difference between how state funding was actually distributed and how it would have been distributed if districts could have financed the average is shown in figure XXXVIII.3.
This is the local funding that could have been raised assuming all districts had made the same average tax effort. The average is the maximum foundation level possible in a state.

Appendix XXXIX State Profile: North Dakota

As table XXXIX.1 shows, in school year 1991-92, the state provided 48 percent of the total funding to North Dakota’s school districts. Total funding (state and local funds combined) per weighted pupil in North Dakota averaged $4,079 with an implicit foundation level of $1,957 for each student, which is 48 percent of the average and represents the state’s equalization effort. (To compare this effort with those of other states, see fig. 5.) The targeting score for state funding was .000, indicating that state education funds were not targeted to poor or wealthy districts. (To compare this score with those of other states, see table V.1 in app. V.) The fiscal neutrality score was .236, indicating that total funding increased as district income increased. (To compare this score with those of other states, see fig. 1.)

A North Dakota education official reported that the state had changed its school finance system since school year 1991-92 to increase funding to poor districts compared with wealthy districts (see app. LVI).

To put the state’s school finance system in perspective, table XXXIX.2 presents demographic data for school year 1991-92 for five groups of districts of increasing district income, including average total funding per weighted pupil, state share of total funding (percent), targeting score (state funds), and local funding raised for every $1,000 of district income.

Table XXXIX.3 presents data on how state and local funding was distributed among the five groups of North Dakota districts. North Dakota’s equalization policies reduced the funding disparity between the wealthy and poor groups from about 48 percent to about 18 percent. Figure XXXIX.1 provides table information in graphic form.

Table XXXIX.4 provides data on the distribution of state and local funding if all districts could have spent the average total funding per weighted pupil with an average tax effort. This assumes the state optimized its targeting effort without changing the state share or the total funding for education. Under this scenario, the implicit foundation level equals the maximum possible foundation level (the state average). Figure XXXIX.2 provides this information in graphic form. The difference between how state funding was actually distributed and how it would have been distributed if districts could have financed the average is shown in figure XXXIX.3. This is the local funding that could have been raised assuming all districts had made the same average tax effort. The average is the maximum foundation level possible in a state.

Appendix XL State Profile: Ohio

As table XL.1 shows, in school year 1991-92, the state provided about 42 percent of the total funding to Ohio’s school districts. Total funding (state and local funds combined) per weighted pupil in Ohio averaged $4,709 with an implicit foundation level of $2,325 for each student, which is about 49 percent of the average and represents the state’s equalization effort.
(To compare this effort with those of other states, see fig. 5.) The targeting score for state funding was –.180, indicating that state education funds were targeted to poor districts. (To compare this score with those of other states, see table V.1 in app. V.) The fiscal neutrality score was .315, indicating that total funding increased as district income increased. (To compare this score with those of other states, see fig. 1.)

An Ohio education official reported that the state had changed its school finance system since school year 1991-92 to increase funding to poor districts compared with wealthy districts (see app. LVI).

To put the state’s school finance system in perspective, table XL.2 presents demographic data for school year 1991-92 for five groups of districts of increasing district income, including average total funding per weighted pupil, state share of total funding (percent), targeting score (state funds), and local funding raised for every $1,000 of district income.

Table XL.3 presents data on how state and local funding was distributed among the five groups of Ohio districts. Ohio’s equalization policies reduced the funding disparity between the wealthy and poor groups from about 110 percent to 32 percent. Figure XL.1 provides table information in graphic form.

Table XL.4 provides data on the distribution of state and local funding if all districts could have spent the average total funding per weighted pupil with an average tax effort. This assumes the state optimized its targeting effort without changing the state share or the total funding for education. Under this scenario, the implicit foundation level equals the maximum possible foundation level (the state average). Figure XL.2 provides this information in graphic form. The difference between how state funding was actually distributed and how it would have been distributed if districts could have financed the average is shown in figure XL.3. This is the local funding that could have been raised assuming all districts had made the same average tax effort. The average is the maximum foundation level possible in a state.

Appendix XLI State Profile: Oklahoma

As table XLI.1 shows, in school year 1991-92, the state provided about 71 percent of the total funding to Oklahoma’s school districts. Total funding (state and local funds combined) per weighted pupil in Oklahoma averaged $3,623 with an implicit foundation level of $2,838 for each student, which is about 78 percent of the average and represents the state’s equalization effort. (To compare this effort with those of other states, see fig. 5.) The targeting score for state funding was –.102, indicating that state education funds were targeted to poor districts. (To compare this score with those of other states, see table V.1 in app. V.) The fiscal neutrality score was –.053, indicating that total funding increased as district income decreased. (To compare this score with those of other states, see fig. 1.)

To put the state’s school finance system in perspective, table XLI.2 presents demographic data for school year 1991-92 for five groups of districts of increasing district income, including average total funding per weighted pupil, state share of total funding (percent), targeting score (state funds), and local funding raised for every $1,000 of district income.
Table XLI.3 presents data on how state and local funding was distributed among the five groups of Oklahoma districts. Oklahoma’s equalization policies eliminated the 69 percent funding disparity between the wealthy and poor groups, resulting in poor districts having about 6 percent more funding than wealthy districts. Figure XLI.1 provides table information in graphic form.

Table XLI.4 provides data on the distribution of state and local funding if all districts could have spent the average total funding per weighted pupil with an average tax effort. This assumes the state optimized its targeting effort without changing the state share or the total funding for education. Under this scenario, the implicit foundation level equals the maximum possible foundation level (the state average). Figure XLI.2 provides this information in graphic form. The difference between how state funding was actually distributed and how it would have been distributed if districts could have financed the average is shown in figure XLI.3. This is the local funding that could have been raised assuming all districts had made the same average tax effort. The average is the maximum foundation level possible in a state.

Appendix XLII State Profile: Oregon

As table XLII.1 shows, in school year 1991-92, the state provided about 31 percent of the total funding to Oregon’s school districts. Total funding (state and local funds combined) per weighted pupil in Oregon averaged $5,087 with an implicit foundation level of $1,652 for each student, which is about 33 percent of the average and represents the state’s equalization effort. (To compare this effort with those of other states, see fig. 5.) The targeting score for state funding was –.043, indicating that state education funds were targeted to poor districts. (To compare this score with those of other states, see table V.1 in app. V.) The fiscal neutrality score was .166, indicating that total funding increased as district income increased. (To compare this score with those of other states, see fig. 1.)

An Oregon education official reported that the state had changed its school finance system since school year 1991-92 to increase funding to poor districts compared with wealthy districts (see app. LVI).

To put the state’s school finance system in perspective, table XLII.2 presents demographic data for school year 1991-92 for five groups of districts of increasing district income, including average total funding per weighted pupil, state share of total funding (percent), targeting score (state funds), and local funding raised for every $1,000 of district income.

Table XLII.3 presents data on how state and local funding was distributed among the five groups of Oregon districts. Oregon’s equalization policies reduced the funding disparity between the wealthy and poor groups from about 46 percent to 22 percent. Figure XLII.1 provides table information in graphic form.

Table XLII.4 provides data on the distribution of state and local funding if all districts could have spent the average total funding per weighted pupil with an average tax effort. This assumes the state optimized its targeting effort without changing the state share or the total funding for education. Under this scenario, the implicit foundation level equals the maximum possible foundation level (the state average).
Figure XLII.2 provides this information in graphic form. The difference between how state funding was actually distributed and how it would have been distributed if districts could have financed the average is shown in figure XLII.3. This is the local funding that could have been raised assuming all districts had made the same average tax effort. The average is the maximum foundation level possible in a state.

Appendix XLIII State Profile: Pennsylvania

As table XLIII.1 shows, in school year 1991-92, the state provided 43 percent of the total funding to Pennsylvania’s school districts. Total funding (state and local funds combined) per weighted pupil in Pennsylvania averaged $6,406 with an implicit foundation level of $3,455 for each student, which is about 54 percent of the average and represents the state’s equalization effort. (To compare this effort with those of other states, see fig. 5.) The targeting score for state funding was –.255, indicating that state education funds were targeted to poor districts. (To compare this score with those of other states, see table V.1 in app. V.) The fiscal neutrality score was .300, indicating that total funding increased as district income increased. (To compare this score with those of other states, see fig. 1.)

To put the state’s school finance system in perspective, table XLIII.2 presents demographic data for school year 1991-92 for five groups of districts of increasing district income, including average total funding per weighted pupil, state share of total funding (percent), targeting score (state funds), and local funding raised for every $1,000 of district income.

Table XLIII.3 presents data on how state and local funding was distributed among the five groups of Pennsylvania districts. Pennsylvania’s equalization policies reduced the funding disparity between the wealthy and poor groups from about 142 percent to 32 percent. Figure XLIII.1 provides table information in graphic form.

Table XLIII.4 provides data on the distribution of state and local funding if all districts could have spent the average total funding per weighted pupil with an average tax effort. This assumes the state optimized its targeting effort without changing the state share or the total funding for education. Under this scenario, the implicit foundation level equals the maximum possible foundation level (the state average). Figure XLIII.2 provides this information in graphic form. The difference between how state funding was actually distributed and how it would have been distributed if districts could have financed the average is shown in figure XLIII.3. This is the local funding that could have been raised assuming all districts had made the same average tax effort. The average is the maximum foundation level possible in a state.

Appendix XLIV State Profile: Rhode Island

As table XLIV.1 shows, in school year 1991-92, the state provided about 39 percent of the total funding to Rhode Island’s school districts. Total funding (state and local funds combined) per weighted pupil in Rhode Island averaged $5,939 with an implicit foundation level of $3,953 for each student, which is about 67 percent of the average and represents the state’s equalization effort. (To compare this effort with those of other states, see fig. 5.)
The targeting score for state funding was –.694, indicating that state education funds were targeted to poor districts. (To compare this score with those of other states, see table V.1 in app. V.) The fiscal neutrality score was .274, indicating that total funding increased as district income increased. (To compare this score with those of other states, see fig. 1.)

A Rhode Island education official reported that the state had changed its school finance system since school year 1991-92 to increase funding to poor districts compared with wealthy districts (see app. LVI).

To put the state’s school finance system in perspective, table XLIV.2 presents demographic data for school year 1991-92 for five groups of districts of increasing district income, including average total funding per weighted pupil, state share of total funding (percent), targeting score (state funds), and local funding raised for every $1,000 of district income.

Table XLIV.3 presents data on how state and local funding was distributed among the five groups of Rhode Island districts. Rhode Island’s equalization policies reduced the funding disparity between the wealthy and poor groups from about 85 percent to 19 percent. Figure XLIV.1 provides table information in graphic form.

Table XLIV.4 provides data on the distribution of state and local funding if all districts could have spent the average total funding per weighted pupil with an average tax effort. This assumes the state optimized its targeting effort without changing the state share or the total funding for education. Under this scenario, the implicit foundation level equals the maximum possible foundation level (the state average). Figure XLIV.2 provides this information in graphic form. The difference between how state funding was actually distributed and how it would have been distributed if districts could have financed the average is shown in figure XLIV.3. This is the local funding that could have been raised assuming all districts had made the same average tax effort. The average is the maximum foundation level possible in a state.

Appendix XLV State Profile: South Carolina

As table XLV.1 shows, in school year 1991-92, the state provided about 52 percent of the total funding to South Carolina’s school districts. Total funding (state and local funds combined) per weighted pupil in South Carolina averaged $4,112 with an implicit foundation level of $3,239 for each student, which is about 79 percent of the average and represents the state’s equalization effort. (To compare this effort with those of other states, see fig. 5.) The targeting score for state funding was –.505, indicating that state education funds were targeted to poor districts. (To compare this score with those of other states, see table V.1 in app. V.) The fiscal neutrality score was .150, indicating that total funding increased as district income increased. (To compare this score with those of other states, see fig. 1.)

To put the state’s school finance system in perspective, table XLV.2 presents demographic data for school year 1991-92 for five groups of districts of increasing district income, including average total funding per weighted pupil, state share of total funding (percent), targeting score (state funds), and local funding raised for every $1,000 of district income.

Table XLV.3 presents data on how state and local funding was distributed among the five groups of South Carolina districts.
South Carolina’s equalization policies reduced the funding disparity between the wealthy and poor groups from about 69 to 8 percent. Figure XLV.1 provides the table information in graphic form.

Table XLV.4 provides data on the distribution of state and local funding if all districts could have spent the average total funding per weighted pupil with an average tax effort. This assumes the state optimized its targeting effort without changing the state share or the total funding for education. Under this scenario, the implicit foundation level equals the maximum possible foundation level (the state average). Figure XLV.2 provides this information in graphic form. The difference between how state funding was actually distributed and how it would have been distributed if districts could have financed the average is shown in figure XLV.3. This is the local funding that could have been raised assuming all districts had made the same average tax effort. The average is the maximum foundation level possible in a state.

As table XLVI.1 shows, in school year 1991-92, the state provided about 30 percent of the total funding to South Dakota’s school districts. Total funding (state and local funds combined) per weighted pupil in South Dakota averaged $3,756 with an implicit foundation level of $1,109 for each student, which is about 30 percent of the average and represents the state’s equalization effort. (To compare this effort with those of other states, see fig. 5.) The targeting score for state funding was .000, indicating that state education funds were not targeted to poor or wealthy districts. (To compare this score with those of other states, see table V.1 in app. V.) The fiscal neutrality score was .367, indicating that total funding increased as district income increased. (To compare this score with those of other states, see fig. 1.)

To put the state’s school finance system in perspective, table XLVI.2 presents demographic data for school year 1991-92 for five groups of districts of increasing district income, including each group’s average total funding per weighted pupil, state share of total funding (percent), targeting score for state funds, and local funding raised for every $1,000 of district income. Table XLVI.3 presents data on how state and local funding was distributed among the five groups of South Dakota districts. South Dakota’s equalization policies reduced the funding disparity between the wealthy and poor groups from about 66 to 28 percent. Figure XLVI.1 provides the table information in graphic form.

Table XLVI.4 provides data on the distribution of state and local funding if all districts could have spent the average total funding per weighted pupil with an average tax effort. This assumes the state optimized its targeting effort without changing the state share or the total funding for education. Under this scenario, the implicit foundation level equals the maximum possible foundation level (the state average). Figure XLVI.2 provides this information in graphic form. The difference between how state funding was actually distributed and how it would have been distributed if districts could have financed the average is shown in figure XLVI.3. This is the local funding that could have been raised assuming all districts had made the same average tax effort. The average is the maximum foundation level possible in a state.
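The disparity figures in these profiles can be read as the percentage by which the wealthy group’s funding per weighted pupil exceeds the poor group’s, a reading consistent with the report’s summary statement that wealthy districts had about 24 percent more total funding per weighted pupil than poor districts. On that reading (the notation is ours, not the report’s),

\[
\text{disparity} \;=\; \frac{F_{\text{wealthy}} - F_{\text{poor}}}{F_{\text{poor}}} \times 100,
\]

so South Dakota’s equalization policies narrowed the gap from the wealthy group having roughly 66 percent more funding than the poor group to roughly 28 percent more.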
As table XLVII.1 shows, in school year 1991-92, the state provided 47 percent of the total funding to Tennessee’s school districts. Total funding (state and local funds combined) per weighted pupil in Tennessee averaged $3,329 with an implicit foundation level of $1,566 for each student, which is 47 percent of the average and represents the state’s equalization effort. (To compare this effort with those of other states, see fig. 5.) The targeting score for state funding was .000, indicating that state education funds were not targeted to poor or wealthy districts. (To compare this score with those of other states, see table V.1 in app. V.) The fiscal neutrality score was .242, indicating that total funding increased as district income increased. (To compare this score with those of other states, see fig. 1.) A Tennessee education official reported that the state had changed its school finance system since school year 1991-92 to increase funding to poor districts compared with wealthy districts (see app. LVI).

To put the state’s school finance system in perspective, table XLVII.2 presents demographic data for school year 1991-92 for five groups of districts of increasing district income, including each group’s average total funding per weighted pupil, state share of total funding (percent), targeting score for state funds, and local funding raised for every $1,000 of district income. Table XLVII.3 presents data on how state and local funding was distributed among the five groups of Tennessee districts. Tennessee’s equalization policies reduced the funding disparity between the wealthy and poor groups from about 59 to 21 percent. Figure XLVII.1 provides the table information in graphic form.

Table XLVII.4 provides data on the distribution of state and local funding if all districts could have spent the average total funding per weighted pupil with an average tax effort. This assumes the state optimized its targeting effort without changing the state share or the total funding for education. Under this scenario, the implicit foundation level equals the maximum possible foundation level (the state average). Figure XLVII.2 provides this information in graphic form. The difference between how state funding was actually distributed and how it would have been distributed if districts could have financed the average is shown in figure XLVII.3. This is the local funding that could have been raised assuming all districts had made the same average tax effort. The average is the maximum foundation level possible in a state.

As table XLVIII.1 shows, in school year 1991-92, the state provided about 47 percent of the total funding to Texas’ school districts. Total funding (state and local funds combined) per weighted pupil in Texas averaged $4,603 with an implicit foundation level of $3,318 for each student, which is about 72 percent of the average and represents the state’s equalization effort. (To compare this effort with those of other states, see fig. 5.) The targeting score for state funding was –.522, indicating that state education funds were targeted to poor districts. (To compare this score with those of other states, see table V.1 in app. V.)
The fiscal neutrality score was .003, indicating that total funding increased as district income increased. (To compare this score with those of other states, see fig. 1.) A Texas education official reported that the state had changed its school finance system since school year 1991-92 to increase funding to poor districts compared with wealthy districts (see app. LVI).

To put the state’s school finance system in perspective, table XLVIII.2 presents demographic data for school year 1991-92 for five groups of districts of increasing district income, including each group’s average total funding per weighted pupil, state share of total funding (percent), targeting score for state funds, and local funding raised for every $1,000 of district income. Table XLVIII.3 presents data on how state and local funding was distributed among the five groups of Texas districts. Texas’ equalization policies eliminated the funding disparity between the wealthy and poor groups. Figure XLVIII.1 provides the table information in graphic form.

Table XLVIII.4 provides data on the distribution of state and local funding if all districts could have spent the average total funding per weighted pupil with an average tax effort. This assumes the state optimized its targeting effort without changing the state share or the total funding for education. Under this scenario, the implicit foundation level equals the maximum possible foundation level (the state average). Figure XLVIII.2 provides this information in graphic form. The difference between how state funding was actually distributed and how it would have been distributed if districts could have financed the average is shown in figure XLVIII.3. This is the local funding that could have been raised assuming all districts had made the same average tax effort. The average is the maximum foundation level possible in a state.

As table XLIX.1 shows, in school year 1991-92, the state provided about 60 percent of the total funding to Utah’s school districts. Total funding (state and local funds combined) per weighted pupil in Utah averaged $3,177 with an implicit foundation level of $2,240 for each student, which is about 71 percent of the average and represents the state’s equalization effort. (To compare this effort with those of other states, see fig. 5.) The targeting score for state funding was –.172, indicating that state education funds were targeted to poor districts. (To compare this score with those of other states, see table V.1 in app. V.) The fiscal neutrality score was .036, indicating that total funding increased as district income increased. (To compare this score with those of other states, see fig. 1.) A Utah education official reported that the state had changed its school finance system since school year 1991-92 to increase funding to poor districts compared with wealthy districts (see app. LVI).

To put the state’s school finance system in perspective, table XLIX.2 presents demographic data for school year 1991-92 for five groups of districts of increasing district income, including each group’s average total funding per weighted pupil, state share of total funding (percent), targeting score for state funds, and local funding raised for every $1,000 of district income. Table XLIX.3 presents data on how state and local funding was distributed among the five groups of Utah districts.
Utah’s equalization policies eliminated the funding disparity between the wealthy and poor groups, resulting in poor districts having 1 percent more funding than wealthy districts. Figure XLIX.1 provides the table information in graphic form.

Table XLIX.4 provides data on the distribution of state and local funding if all districts could have spent the average total funding per weighted pupil with an average tax effort. This assumes the state optimized its targeting effort without changing the state share or the total funding for education. Under this scenario, the implicit foundation level equals the maximum possible foundation level (the state average). Figure XLIX.2 provides this information in graphic form. The difference between how state funding was actually distributed and how it would have been distributed if districts could have financed the average is shown in figure XLIX.3. This is the local funding that could have been raised assuming all districts had made the same average tax effort. The average is the maximum foundation level possible in a state.

As table L.1 shows, in school year 1991-92, the state provided 29 percent of the total funding to Vermont’s school districts. Total funding (state and local funds combined) per weighted pupil in Vermont averaged $7,722 with an implicit foundation level of $3,453 for each student, which is about 45 percent of the average and represents the state’s equalization effort. (To compare this effort with those of other states, see fig. 5.) The targeting score for state funding was –.539, indicating that state education funds were targeted to poor districts. (To compare this score with those of other states, see table V.1 in app. V.) The fiscal neutrality score was .176, indicating that total funding increased as district income increased. (To compare this score with those of other states, see fig. 1.)

To put the state’s school finance system in perspective, table L.2 presents demographic data for school year 1991-92 for five groups of districts of increasing district income, including each group’s average total funding per weighted pupil, state share of total funding (percent), targeting score for state funds, and local funding raised for every $1,000 of district income. Table L.3 presents data on how state and local funding was distributed among the five groups of Vermont districts. Vermont’s equalization policies reduced the funding disparity between the wealthy and poor groups from about 91 to 31 percent. Figure L.1 provides the table information in graphic form.

Table L.4 provides data on the distribution of state and local funding if all districts could have spent the average total funding per weighted pupil with an average tax effort. This assumes the state optimized its targeting effort without changing the state share or the total funding for education. Under this scenario, the implicit foundation level equals the maximum possible foundation level (the state average). Figure L.2 provides this information in graphic form. The difference between how state funding was actually distributed and how it would have been distributed if districts could have financed the average is shown in figure L.3. This is the local funding that could have been raised assuming all districts had made the same average tax effort. The average is the maximum foundation level possible in a state.
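The counterfactual in each table 4 is easiest to picture with the textbook foundation-aid formula; the expression below is an illustration of that standard mechanism, not a formula taken from this report. Under a foundation program, state aid per weighted pupil in district $d$ fills the gap between the foundation level $F$ and what a required minimum tax effort $t^{*}$ raises from the district’s fiscal capacity $Y_d$:

\[
A_d \;=\; \max\bigl(0,\; F - t^{*}\,Y_d\bigr),
\]

so any district levying at least $t^{*}$ can spend at least $F$ per weighted pupil. The implicit foundation level estimated in these profiles is the largest $F$ that a state’s actual aid distribution could have supported at a uniform minimum effort; table 4 of each profile shows the distribution under which $F$ reaches its ceiling, the state average.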
As table LI.1 shows, in school year 1991-92, the state provided 36 percent of the total funding to Virginia’s school districts. Total funding (state and local funds combined) per weighted pupil in Virginia averaged $4,713 with an implicit foundation level of $2,541 for each student, which is about 54 percent of the average and represents the state’s equalization effort. (To compare this effort with those of other states, see fig. 5.) The targeting score for state funding was –.499, indicating that state education funds were targeted to poor districts. (To compare this score with those of other states, see table V.1 in app. V.) The fiscal neutrality score was .377, indicating that total funding increased as district income increased. (To compare this score with those of other states, see fig. 1.)

To put the state’s school finance system in perspective, table LI.2 presents demographic data for school year 1991-92 for five groups of districts of increasing district income, including each group’s average total funding per weighted pupil, state share of total funding (percent), targeting score for state funds, and local funding raised for every $1,000 of district income. Table LI.3 presents data on how state and local funding was distributed among the five groups of Virginia districts. Virginia’s equalization policies reduced the funding disparity between the wealthy and poor groups from about 168 to 38 percent. Figure LI.1 provides the table information in graphic form.

Table LI.4 provides data on the distribution of state and local funding if all districts could have spent the average total funding per weighted pupil with an average tax effort. This assumes the state optimized its targeting effort without changing the state share or the total funding for education. Under this scenario, the implicit foundation level equals the maximum possible foundation level (the state average). Figure LI.2 provides this information in graphic form. The difference between how state funding was actually distributed and how it would have been distributed if districts could have financed the average is shown in figure LI.3. This is the local funding that could have been raised assuming all districts had made the same average tax effort. The average is the maximum foundation level possible in a state.

As table LII.1 shows, in school year 1991-92, the state provided about 75 percent of the total funding to Washington’s school districts. Total funding (state and local funds combined) per weighted pupil in Washington averaged $5,302 with an implicit foundation level of $4,025 for each student, which is about 76 percent of the average and represents the state’s equalization effort. (To compare this effort with those of other states, see fig. 5.) The targeting score for state funding was –.009, indicating that state education funds were targeted to poor districts. (To compare this score with those of other states, see table V.1 in app. V.) The fiscal neutrality score was .055, indicating that total funding increased as district income increased. (To compare this score with those of other states, see fig. 1.)
To put the state’s school finance system in perspective, table LII.2 presents demographic data for school year 1991-92 for five groups of districts of increasing district income, including each group’s average total funding per weighted pupil, state share of total funding (percent), targeting score for state funds, and local funding raised for every $1,000 of district income. Table LII.3 presents data on how state and local funding was distributed among the five groups of Washington districts. Washington’s equalization policies reduced the funding disparity between the wealthy and poor groups from about 99 to 4 percent. Figure LII.1 provides the table information in graphic form.

Table LII.4 provides data on the distribution of state and local funding if all districts could have spent the average total funding per weighted pupil with an average tax effort. This assumes the state optimized its targeting effort without changing the state share or the total funding for education. Under this scenario, the implicit foundation level equals the maximum possible foundation level (the state average). Figure LII.2 provides this information in graphic form. The difference between how state funding was actually distributed and how it would have been distributed if districts could have financed the average is shown in figure LII.3. This is the local funding that could have been raised assuming all districts had made the same average tax effort. The average is the maximum foundation level possible in a state.

As table LIII.1 shows, in school year 1991-92, the state provided about 73 percent of the total funding to West Virginia’s school districts. Total funding (state and local funds combined) per weighted pupil in West Virginia averaged $4,927 with an implicit foundation level of $4,028 for each student, which is about 82 percent of the average and represents the state’s equalization effort. (To compare this effort with those of other states, see fig. 5.) The targeting score for state funding was –.127, indicating that state education funds were targeted to poor districts. (To compare this score with those of other states, see table V.1 in app. V.) The fiscal neutrality score was .071, indicating that total funding increased as district income increased. (To compare this score with those of other states, see fig. 1.)

To put the state’s school finance system in perspective, table LIII.2 presents demographic data for school year 1991-92 for five groups of districts of increasing district income, including each group’s average total funding per weighted pupil, state share of total funding (percent), targeting score for state funds, and local funding raised for every $1,000 of district income. Table LIII.3 presents data on how state and local funding was distributed among the five groups of West Virginia districts. West Virginia’s equalization policies reduced the funding disparity between the wealthy and poor groups from about 70 to 4 percent. Figure LIII.1 provides the table information in graphic form.

Table LIII.4 provides data on the distribution of state and local funding if all districts could have spent the average total funding per weighted pupil with an average tax effort. This assumes the state optimized its targeting effort without changing the state share or the total funding for education.
Under this scenario, the implicit foundation level equals the maximum possible foundation level (the state average). Figure LIII.2 provides this information in graphic form. The difference between how state funding was actually distributed and how it would have been distributed if districts could have financed the average is shown in figure LIII.3. This is the local funding that could have been raised assuming all districts had made the same average tax effort. The average is the maximum foundation level possible in a state.

As table LIV.1 shows, in school year 1991-92, the state provided about 46 percent of the total funding to Wisconsin’s school districts. Total funding (state and local funds combined) per weighted pupil in Wisconsin averaged $5,865 with an implicit foundation level of $3,439 for each student, which is about 59 percent of the average and represents the state’s equalization effort. (To compare this effort with those of other states, see fig. 5.) The targeting score for state funding was –.270, indicating that state education funds were targeted to poor districts. (To compare this score with those of other states, see table V.1 in app. V.) The fiscal neutrality score was .129, indicating that total funding increased as district income increased. (To compare this score with those of other states, see fig. 1.)

To put the state’s school finance system in perspective, table LIV.2 presents demographic data for school year 1991-92 for five groups of districts of increasing district income, including each group’s average total funding per weighted pupil, state share of total funding (percent), targeting score for state funds, and local funding raised for every $1,000 of district income. Table LIV.3 presents data on how state and local funding was distributed among the five groups of Wisconsin districts. Wisconsin’s equalization policies reduced the funding disparity between the wealthy and poor groups from about 74 to about 8 percent. Figure LIV.1 provides the table information in graphic form.

Table LIV.4 provides data on the distribution of state and local funding if all districts could have spent the average total funding per weighted pupil with an average tax effort. This assumes the state optimized its targeting effort without changing the state share or the total funding for education. Under this scenario, the implicit foundation level equals the maximum possible foundation level (the state average). Figure LIV.2 provides this information in graphic form. The difference between how state funding was actually distributed and how it would have been distributed if districts could have financed the average is shown in figure LIV.3. This is the local funding that could have been raised assuming all districts had made the same average tax effort. The average is the maximum foundation level possible in a state.

As table LV.1 shows, in school year 1991-92, the state provided about 53 percent of the total funding to Wyoming’s school districts. Total funding (state and local funds combined) per weighted pupil in Wyoming averaged $5,920 with an implicit foundation level of $3,111 for each student, which is about 53 percent of the average and represents the state’s equalization effort.
(To compare this effort with those of other states, see fig. 5.) The targeting score for state funding was .000, indicating that state education funds were not targeted to poor or wealthy districts. (To compare this score with those of other states, see table V.1 in app. V.) The fiscal neutrality score was –.196, indicating that total funding increased as district income decreased. (To compare this score with those of other states, see fig. 1.)

To put the state’s school finance system in perspective, table LV.2 presents demographic data for school year 1991-92 for five groups of districts of increasing district income, including each group’s average total funding per weighted pupil, state share of total funding (percent), targeting score for state funds, and local funding raised for every $1,000 of district income. Table LV.3 presents data on how state and local funding was distributed among the five groups of Wyoming districts. Wyoming’s equalization policies resulted in wealthy districts having 16 percent less funding than poor districts. Figure LV.1 provides the table information in graphic form.

Table LV.4 provides data on the distribution of state and local funding if all districts could have spent the average total funding per weighted pupil with an average tax effort. This assumes the state optimized its targeting effort without changing the state share or the total funding for education. Under this scenario, the implicit foundation level equals the maximum possible foundation level (the state average). Figure LV.2 provides this information in graphic form. The difference between how state funding was actually distributed and how it would have been distributed if districts could have financed the average is shown in figure LV.3. This is the local funding that could have been raised assuming all districts had made the same average tax effort. The average is the maximum foundation level possible in a state.

In this report, we relied on state and local funding data from the 1991-92 school year. However, many states have made subsequent changes to their school finance systems in response to legal challenges or to concerns about equity. We telephoned officials in the 49 states to determine what changes had been implemented in the school finance system from school years 1991-92 through 1995-96. We specifically asked about changes in targeting that would affect low-wealth districts and changes in a state’s share of total funding. These two factors affect the implicit foundation level that all districts in a state can finance with the same minimum tax effort: the greater the targeting effort to low-wealth districts or the greater the state share, or both, the greater the implicit foundation level.

Education officials in over half the states (25) said their state had not increased the targeting of state funds to low-wealth districts since school year 1991-92. Officials in the other 24 states reported that their state was targeting more or many more state funds to low-wealth districts. We did not verify the statements of the state officials. Fewer states had significantly increased the state share of total funding. Officials in eight states reported an increase of 6 percentage points or more in the state share.
Officials in 38 states reported that their state’s share of total funding had a net increase or decrease of 5 percentage points or less, and 3 states reported a decrease of 6 percentage points or more. Table LVI.1 summarizes our findings of the changes states have made. (Table LVI.1 is not reproduced here; it reports, by state, the change in targeting and the change in state share, in percentage points, with some changes measured as of school year 1993-94 or 1994-95.)

Glossary of terms used in this report:

Elasticity: The percent change in one variable relative to a 1-percent change in another variable.

Equalization: In the context of this report, a state’s effort to compensate for differences in districts’ abilities to raise education revenues.

Equalization effort: The ratio of a state’s implicit foundation level to the maximum foundation level (the state average).

Equity: Equity in school finances is concerned with the distribution of education funding or resources. To determine the equity of school finance systems, experts recommend considering the following four issues: (1) who is to benefit (taxpayers or public school students); (2) what objects are to be equally distributed, such as revenues or key resources (for example, curriculum and instruction), or outcomes (for example, student achievement); (3) what principle is to be used for determining whether distribution is equitable (such as vertical equity or fiscal neutrality); and (4) the statistic used to measure the degree of equity.

Fiscal neutrality: A definition of equity that asserts that no relationship should exist between educational spending per pupil and local district income per pupil (or some other measure of fiscal capacity). In this study, a fiscal neutrality score of 0 indicates that no relationship exists between district funding and district income.

Fiscal neutrality score: The elasticity of total (state and local) funding relative to district income.

Implicit foundation level: The minimum amount of total funding per weighted pupil that a state’s equalization policies implicitly enable districts to spend with the same minimum local tax effort.

Maximum foundation level (state average): The average amount of total funding per weighted pupil in a state.

Tax effort: In this study, the tax effort is a ratio of a district’s local education revenue to its income.
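Because the glossary defines the fiscal neutrality score as an elasticity and tax effort as a revenue-to-income ratio, both can be written compactly; the notation below is ours, added for clarity:

\[
\text{fiscal neutrality score} \;=\; \frac{\%\,\Delta(\text{total funding per weighted pupil})}{\%\,\Delta(\text{district income})},
\qquad
\text{tax effort}_d \;=\; \frac{\text{local education revenue}_d}{\text{income}_d}.
\]

A score of 0 means funding bears no relationship to income, satisfying fiscal neutrality; the positive scores in most profiles mean total funding rises with district income, while a negative score, such as Wyoming’s –.196, means it falls. The targeting score reported for state funds can be read the same way, apparently as the analogous elasticity computed for state funding alone, so negative values indicate state aid concentrated in poor districts.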
Pursuant to a congressional request, GAO reviewed funding gaps between poor and wealthy school districts, focusing on the: (1) size of the gap in total (state and local combined) funding between poor and wealthy school districts for each state; (2) key factors that affect the size of states' funding gaps; (3) effect of states' school finance policies on the funding gap; and (4) implications of this information for state policies. GAO found that: (1) although most states pursued strategies to supplement the local funding of poor school districts, wealthier districts in 37 states had more total (state and local combined) funding than poor districts in the 1991-92 school year; (2) this disparity existed even after adjusting for differences in geographic and student need-related education costs; (3) on average, wealthy districts had about 24 percent more total funding per weighted pupil than poor districts; (4) three factors affected the funding gap between a state's poor and wealthy districts; (5) first, the extent to which a state targeted funding to poor districts affected the funding gap; (6) although targeting efforts typically reduced funding gaps, they did not eliminate them; (7) second, a state's share of total funding can reduce the funding gap, even when the targeting effort is low; (8) finally, the local tax effort to raise education funding affected the funding gap; (9) at the local level, the greater the tax effort that poor districts were willing to make compared with wealthy districts, the smaller the gap in funding between these two types of districts; (10) poor districts in 35 states made a greater tax effort than wealthy districts; (11) because all three of these factors can affect the funding gap, analyzing the effects of state school finance policies required excluding the effects of the local tax effort; (12) to do this, GAO estimated the foundation level each state's school finance policies implicitly supported, which estimates the minimum total funding per pupil that districts could finance if they were to make the same local tax effort; (13) GAO's resulting analysis showed wide variations in the implicit foundation level that state school finance policies supported in school year 1991-92; (14) the implicit foundation levels of almost all states were less than their state average funding levels; (15) in 14 states, the implicit foundation level was less than half the state average funding level; (16) although the relative tax effort of poor and wealthy districts greatly affects the funding gaps between these districts, higher implicit foundation levels can help reduce the gaps; (17) therefore, states can further reduce the funding gaps by increasing their targeting effort to poor districts, increasing the state share of total funding, or increasing both; and (18) officials in a number of states reported making such changes between school years 1991-92 and 1995-96, although 25 states reported making little or no changes in their targeting of poor districts or state share.
After five years of receiving an unqualified opinion on its financial statements, on February 22, 2002, NASA’s new independent auditor disclaimed an opinion on the agency’s fiscal year 2001 financial statements. Specifically, the audit report states that NASA was unable to provide the detailed support needed to determine the accuracy of the agency’s reported obligations, expenses, property, plant, and equipment, and materials for fiscal year 2001. According to the report, each of NASA’s 10 centers uses a different financial management system, each of which has multiple feeder systems that summarize individual transactions on a daily or monthly basis. Financial information from the centers may be summarized more than once before it is uploaded into NASA’s General Ledger Accounts System (GLAS). The successive summarization of data through the various systems impedes NASA’s ability to maintain an audit trail through the summary data to the detailed transaction-level source documentation. Current OMB and GAO guidance on internal control requires agencies to maintain transaction-level documentation and to make it readily available for review. NASA was unable to provide sufficient transaction-level documentation to support certain obligation and expense transactions and certain transaction-level cost allocations that the auditors had selected for testing. In addition, the fiscal year 2001 audit report identifies a number of significant internal control weaknesses related to accounting for space station material and equipment and to computer security. The report also states that NASA’s financial management systems do not substantially comply with federal financial management systems requirements and applicable federal accounting standards.

While the fiscal year 2001 auditor’s report draws attention to the issue, NASA’s financial management difficulties are not new. The weaknesses discussed in the auditor’s report are consistent with the findings discussed in our previous reports. We have reported on NASA’s contract management problems, misstatement of its Statement of Budgetary Resources, lack of detailed support for amounts reported against certain cost limits, and lack of historical cost data for accurately projecting future costs. We first identified NASA’s contract management as an area at high risk in 1990 because of vulnerabilities to waste, fraud, abuse, and mismanagement. Specifically, we found that NASA lacked effective systems and processes for overseeing contractor activities and did not emphasize controlling costs. While NASA has made progress in managing many of its procurement practices, little progress has been made in correcting the financial system deficiencies that prevent NASA from effectively managing and overseeing its procurement dollars. As a result, contract management remains an area of high risk.

The agency’s financial management systems environment is much the same as it was in 1993, the last time we performed comprehensive audit work in that area. It is composed of decentralized, nonintegrated systems with policies, procedures, and practices that are unique to each of its 10 centers. For the most part, data formats are not standardized, automated systems are not interfaced, and on-line financial information is not readily available to program managers. As a result, NASA cannot ensure that contracts are being efficiently and effectively implemented and budgets are executed as planned.
NASA’s long-standing problems in developing and implementing integrated financial management systems contributed to a $644 million misstatement in NASA’s fiscal year 1999 Statement of Budgetary Resources (SBR), which we discussed in our March 2001 report. This error was not detected by NASA Chief Financial Officer (CFO) personnel or by its auditor, Arthur Andersen. Instead, the House Committee on Science discovered the discrepancy in comparing certain line items in the NASA SBR to related figures in the President’s Budget. NASA used an ad hoc process involving a computer spreadsheet to gather the information needed for certain SBR line items because the needed data were not captured by NASA’s general ledger systems. Because each of NASA’s 10 reporting units maintained different accounting systems, none of which were designed to meet FFMIA requirements, it was left up to the units to determine how best to gather the requested data. This cumbersome, time-consuming process ultimately contributed to the misstatement of NASA’s SBR.

The SBR is intended to provide information on an agency’s use of budgetary resources provided by the Congress. If reliable, the SBR can provide valuable information for management and oversight purposes to assess the obligations related to prior-year agency activities and to make decisions about future funding. Based on this work, we questioned NASA management’s and its auditor’s determination that NASA’s systems were in substantial compliance with the requirements of FFMIA. As I mentioned earlier, and it bears repeating, FFMIA builds on previous financial management reform legislation by emphasizing the need for agencies to have systems that can generate timely, accurate, and useful information with which to make informed decisions and to ensure accountability on an ongoing basis. This is really the end goal of financial management reforms. In particular, we questioned whether NASA complied with the federal financial management systems requirements for using integrated financial management systems.

NASA’s financial management problems were also highlighted in our effort to verify amounts NASA reported to the Congress against legislatively imposed spending limits on its International Space Station and Space Shuttle programs. Since NASA began the current program to build the space station, the program has been characterized by a series of schedule delays, reductions in space station content and capabilities, and a substantial development cost overrun. In February 2001, NASA revealed that the program faced a $4 billion cost overrun that would raise the cost of constructing the space station to $28 billion to $30 billion, 61 percent to 72 percent above the original 1993 estimate. In part to address concerns regarding the escalating space station costs, section 202 of the National Aeronautics and Space Administration Authorization Act for Fiscal Year 2000 (P.L. 106-391) establishes general cost limitations on the International Space Station and Space Shuttle programs.
The act requires that NASA, as part of its annual budget request, update the Congress on its progress by (1) accounting for and reporting amounts obligated against the limitations to date, (2) identifying the amount of budget authority requested for the future development and completion of the space station, and (3) arranging for the General Accounting Office to verify the accounting submitted to the Congress.

It was our intention to verify NASA’s accounting for the space station and shuttle limits by testing the propriety of charges to various agency programs to ensure that all obligations charged to the space station and shuttle programs were appropriate and that no space station or shuttle obligations were wrongly charged to other programs. However, NASA was unable to provide the detailed obligation data needed to support amounts reported to the Congress against the space station and shuttle program cost limits. NASA’s inability to provide detailed data for amounts obligated against the limits is again due to its lack of a modern, integrated financial management system. As I mentioned earlier, NASA’s 10 centers operate with decentralized, nonintegrated systems and with policies, procedures, and practices that are unique to each center. Consequently, the systems have differing capabilities with respect to providing detailed obligation data. According to NASA officials, only 5 of its 10 centers are able to provide complete, detailed support for amounts obligated during fiscal years 1994 through 2001, the period in which NASA incurred obligations related to the limits. In fact, at one center, detailed obligation data are not available for even current-year obligations.

As part of our effort to verify NASA’s accounting for the space station and shuttle cost limits, we also found that NASA was not able to provide support for the actual cost of completed space station components, either in total or by subsystem or element. For example, NASA cannot identify the actual costs of individual space station components such as Unity (Node 1) or Destiny (U.S. Lab). Although in its audited fiscal year 2000 financial statements NASA capitalized the cost of Unity, Destiny, and other items in orbit or awaiting launch at about $8 billion, according to NASA officials, these amounts are based primarily on cost estimates, not actual costs. NASA officials stated that its accounting systems were designed prior to the implementation of current federal cost accounting standards and financial systems standards that require agencies to track and maintain cost data needed for management activities, such as estimating and controlling costs, performance measurement, and making economic trade-off decisions. As a result, NASA’s systems do not track the cost of individual space station subsystems or elements. According to NASA officials, the agency manages and tracks space station costs by contract and does not need to know the cost of individual subsystems or elements to effectively manage the program. To the contrary, we found that NASA estimates potential and probable future program costs to determine the impact of canceling, deferring, or adding space station content. These cost estimates often identify the cost of specific space station subsystems. However, because NASA does not attempt to track costs by element or subsystem, the agency does not know the actual cost of completed space station components and is not able to reexamine its cost estimates for validity once costs have been realized.
We continue to believe that NASA needs to collect, maintain, and report the full cost of individual subsystems and hardware so that NASA can make valid comparisons between estimates and final costs and so that the Congress can hold NASA accountable for differences between budgeted and actual costs. Modernizing NASA’s financial management system is essential to providing timely, relevant, and reliable information needed to manage cost, measure performance, make program-funding decisions, and analyze outsourcing or privatization options. However, technology alone will not solve NASA’s financial management problems. The key to transforming NASA’s financial management organization into a customer-focused partner in program results hinges on the sustained leadership of NASA’s top executives. As we found in our study of leading private sector and state organizations, clear, strong executive leadership, combined with factors such as effective organizational alignment, strategic human capital management, and end-to-end business process improvement, will be critical for ensuring that NASA’s financial management organization delivers the kind of analysis and forward-looking information needed to effectively manage NASA’s many complex space programs. Specifically, as discussed in the executive guide, to reap the full benefit of a modern, integrated financial management system, NASA must go beyond obtaining an unqualified audit opinion toward (1) routinely generating reliable cost and performance information and analysis, (2) undertaking other value-added activities that support strategic decision-making and mission performance, and (3) building a finance team that supports the agency’s mission and goals.

An independent task force created by NASA to review and assess space station costs, budget, and management reached a similar conclusion. In its November 1, 2001, report, the International Space Station (ISS) Management and Cost Evaluation (IMCE) Task Force found that the space station program office does not collect the historical cost data needed to accurately project future costs and thus perform major program-level financial forecasting and strategic planning. The task force also reported that NASA’s ability to forecast and plan is weakened by diverse and often incompatible center-level accounting systems and uneven and nonstandard cost-reporting capabilities. The IMCE also concluded that the current weaknesses in financial reporting are a symptom, not a cause, of the problem and that enhanced reporting capabilities, by way of a new integrated financial management system, will not fully solve the problem. The root of the problem, according to the task force, is that finance is not viewed as intrinsic to NASA’s program management decision process. The task force concluded that under the current organizational structure, the financial management function is centered on tracking and documenting what “took place” rather than what “could and should take place” from an analytical cost-planning standpoint.

NASA has cited deficiencies with its financial management system as a primary reason for not having the necessary data required for both internal management and external reporting purposes. To its credit, NASA recognizes the urgency of successfully implementing an integrated financial management system. The stakes are particularly high, considering this is NASA’s third attempt since 1988 to implement a new system.
The first two attempts were abandoned after 12 years and after spending about $180 million. NASA expects to complete the current systems effort by 2006 at a cost of $475 million. The President’s Management Agenda includes improved financial management performance as one of his five governmentwide management goals. In addition, in August 2001, the Principals of the Joint Financial Management Improvement Program—the Secretary of the Treasury, the Director of the Office of Management and Budget, the Director of the Office of Personnel Management, and the Comptroller General—began a series of quarterly meetings that marked the first time all four of the Principals had gathered together in over 10 years. To date, these sessions have resulted in substantive deliberations and agreements focused on key issues such as better defining measures for financial management success. These measures include being able to routinely provide timely, reliable, and useful financial information and having no material internal control weaknesses. Our experience has shown that improvements in several key elements are needed for NASA to effectively address the underlying causes of its financial management challenges. These elements, which will be key to any successful approach to financial management reform, include: addressing NASA’s financial management challenges as part of a comprehensive, integrated, NASA-wide business process reform; providing for sustained leadership by the Administrator to implement needed financial management reforms; establishing clear lines of responsibility, authority, and accountability for such reform tied to the Administrator; incorporating results-oriented performance measures and monitoring tied to financial management reforms; providing appropriate incentives or consequences for action or inaction; establishing an enterprisewide system architecture to guide and direct financial management modernization investments; and ensuring effective oversight and monitoring. In this regard, NASA’s new Administrator comes to the position with a strong management background and expertise in financial management.
In fiscal years 1996 to 2000, the National Aeronautics and Space Administration (NASA) was one of the few agencies that received an unqualified opinion on its financial statements and was in substantial compliance with the Federal Financial Management Improvement Act (FFMIA). This suggested that NASA could generate reliable information for annual external financial reporting and could provide accurate, reliable information for day-to-day decision-making. In contrast with the unqualified or "clean" audit opinions of its previous auditor, Arthur Andersen, NASA's new independent auditor, PricewaterhouseCoopers, disclaimed an opinion on the agency's fiscal year 2001 financial statements because of significant internal control weaknesses. PricewaterhouseCoopers also concluded that NASA's financial management systems do not substantially comply with the requirements of FFMIA. Modernizing NASA's financial management system is essential to providing accurate, useful information for external financial reporting as well as internal management decision-making. NASA is working on an integrated financial management system that it expects to have fully operational in fiscal year 2006 at an estimated cost of $475 million. This is NASA's third attempt to implement a new financial management system. The first two efforts were abandoned after 12 years and $180 million. Given the high stakes involved, NASA's top management must provide the necessary direction, oversight, and sustained attention to ensure the project's success.
To help conduct human-capital planning efforts for the department’s civilian workforce, DOD’s Strategic Human Capital Planning Office used functional community categories to group together employees who perform similar functions. Each of these communities includes a varying number of mission-critical occupations. For the 2010-2018 strategic workforce plan, 11 functional communities provided some information on their 22 mission-critical occupations in an appendix to the plan. Mission-critical occupations are positions key to DOD’s current and future mission requirements, as well as those that present recruiting and retention challenges. Table 1 lists the 11 functional communities along with their mission-critical occupations.

DOD’s mandated strategic workforce plans are developed by the Defense Civilian Personnel Advisory Service’s Strategic Human Capital Planning Office, which is within the Office of the Under Secretary of Defense for Personnel and Readiness. To collect data and information from the functional communities, the Strategic Human Capital Planning Office develops a reporting template that it sends to the 11 functional community managers within the Office of the Secretary of Defense (OSD). The template consists of three sections that request information and data on areas such as workforce end-strength forecasts, constraints that affect the ability to meet end-strength targets, the status of competency development, and strategies to fill gaps. To complete the template, the functional community managers work with their counterparts at the component level to collect the necessary information and data for the mission-critical occupations. Once the component-level functional community managers collect the necessary information and data, they send their completed templates back to the functional community integrators, who compile all the information and data for each community into one cohesive functional community document. The Strategic Human Capital Planning Office then compiles the various reports from the functional community managers and integrators and issues the report after it passes an internal review. Figure 1 identifies the key offices that develop the strategic workforce plan.

Our previous work has found that, in general, DOD’s efforts to develop workforce plans have been mixed. In our February 2009 report, we recommended that DOD develop performance plans for its program offices that have responsibilities to oversee development of the strategic workforce plan. Specifically, we recommended that the performance plans include establishing implementation goals and time frames, measuring performance, and aligning activities with resources to guide its efforts to implement its strategic workforce plan. DOD partially concurred with our recommendations and noted that efforts were underway to develop performance plans.

DOD assessed, to varying degrees, the existing and future critical skills and competencies for all but one of its mission-critical occupations, but the department did not assess gaps for most of them. Further, DOD did not include the most up-to-date or timely information when it issued its most recent report.
Section 115b of Title 10 of the United States Code requires that DOD’s strategic workforce plan include an assessment of the critical skills and competencies of the existing civilian-employee workforce. In response to that requirement, DOD assessed, to varying degrees, the existing critical skills and competencies for 21 of its 22 mission-critical occupations. We have previously reported that it is essential for agencies to determine the skills and competencies that are critical to successfully achieving their missions and goals. This is especially important as changes in national security, technology, budget constraints, and other factors change the environment within which federal agencies operate.

The assessments contained in DOD’s 2010-2018 plan, however, varied significantly in terms of the amount of detail provided. This variation can be attributed, in part, to the fact that some communities are able to draw on existing requirements and standards. For example, the information technology functional community used the Clinger-Cohen Competencies for the Information Technology Workforce, which provide a description of the technical competencies for various information technology occupations, to assess the critical skills and competencies within its workforce. Accordingly, the information technology functional community manager was able to provide detail and specificity when describing this community’s assessment processes and the results of those assessments. Similarly, the medical functional community was able to use existing national standards for licensure and board certification for physicians when it assessed particular critical skills and competencies. This community was able to provide specific details about its assessment processes and the results from those assessments. In contrast, while the installations and environment functional community reported on the competency models available for assessing its two mission-critical occupations, firefighters and safety-and-health managers, this community did not provide the results of any analyses using these models.

DOD is also required to report on the critical skills and competencies that will be needed in its future workforce. We have previously reported that an agency needs to define the critical skills and competencies that it will require in the future to meet its strategic program goals. Doing so can help an agency align its human-capital approaches to enable and sustain the contributions of all the critical skills and competencies needed for the future. Our assessment of DOD’s 2010-2018 strategic workforce plan found that for 17 of the 22 mission-critical occupations, DOD provided some discussion of future competencies. For the remaining five mission-critical occupations, DOD reported that functional community managers were waiting for the completion of competency models for their specific mission-critical occupations before assessing future competencies. One functional community, intelligence, did not provide an assessment of skills and competencies for either its existing or future mission-critical occupations. DOD officials told us that the intelligence community’s assessments are maintained in classified documents and could not be provided in the department’s 2010-2018 strategic workforce plan.
According to the plan, the Offices of the Under Secretary of Defense for Intelligence and the Under Secretary of Defense for Personnel and Readiness, along with the Office of the Director of National Intelligence, agreed instead to capture the reporting requirements in already established human-capital employment plans that were submitted by intelligence-community elements to the Office of the Director of National Intelligence. Officials from the Office of the Under Secretary of Defense for Intelligence acknowledged that they should provide input into DOD's strategic workforce plan and stated that they would provide input into the next submission of the plan.

DOD officials responsible for developing the strategic workforce plan said they followed a collaborative process and met numerous times to seek input and guidance. To obtain information and data from each functional community, the Strategic Human Capital Planning Office distributed a reporting template to the functional community managers that contained a series of questions related to the requirements of section 115b of Title 10 of the United States Code. For the 2010-2018 plan, from May 2009 through October 2010, the Strategic Human Capital Planning Office provided informal guidance on template development to functional community managers, integrators, and action officers. This office also provided additional training and one-on-one sessions with integrators and held tailored meetings with functional community managers to help them complete the template. The template, however, did not define key terms such as skills and competencies. Accordingly, we found that the functional community managers interpreted the questions within the template differently and developed different understandings of key terms. For example, officials in one functional community explained to us that they viewed skills as a subset of a larger category of competencies. Officials in a separate functional community associated skills with employee capabilities and competencies with occupational descriptions. Without clear guidance for assessing skills and competencies, functional community managers are likely to continue to provide inconsistent responses that vary in detail and usefulness to decision makers.

Section 115b of Title 10 of the United States Code requires DOD to include an assessment of gaps in the existing and future civilian employee workforce that should be addressed to ensure that the department has continued access to the critical skills and competencies needed to accomplish its mission. We have previously reported that once an agency identifies the critical skills and competencies that its future workforce must possess, it can develop strategies tailored to address gaps in the number, skills and competencies, and deployment of the workforce. Our analysis found, however, that functional community managers reported conducting competency gap assessments for only 8 of the 22 mission-critical occupations. These 8 occupations are nurses, pharmacists, clinical psychologists, social workers, medical officers, security specialists, police officers, and human-resources managers. Further, in the cases where the functional community managers did conduct gap analyses, they did not report the results of these assessments. Officials responsible for developing the 2010-2018 plan told us that they focused on identifying critical-skill gaps based on staffing levels in the mission-critical occupations.
According to these officials, competency gaps will be assessed using the Defense Competency Assessment Tool, which is scheduled for initial deployment in late fiscal year 2013. In some cases, competency models are still being developed that will enable the functional communities to conduct gap assessments for their mission-critical occupations. For example, the financial-management functional community stated specifically in DOD's plan that it did not complete a gap assessment because competency models for its mission-critical occupations remain incomplete. The financial-management community reported in the plan that, upon completion of its competency models, it will be able to fully assess gaps in knowledge and skills. DOD officials responsible for the plan told us that they anticipate that these models will be completed by the end of 2012. Similarly, some functional communities are waiting for the completion of an automated competency-assessment tool before they can complete their gap assessments. The logistics functional community stated in the plan, for example, that it will use DOD's Defense Competency Assessment Tool when it becomes available. Because this community, as of September 2010, had more than 18,000 personnel serving in the mission-critical occupation of logistics-management specialist, community officials explained that the workforce is too large to track without an automated tool. Further, some officials attributed the absence of gap analyses to other priorities that took precedence. Officials acknowledged that they did not address all of the statutory requirements and explained that their work on the Secretary of Defense's 2010 efficiency initiatives—which were introduced to reduce duplication, overhead, and excess—preempted their efforts to develop responses for DOD's 2010 Strategic Workforce Plan.

Finally, none of the functional communities that reported completing gap assessments for eight of the mission-critical occupations reported the results of those assessments. For example, the medical functional community reported that DOD's Military Health System analyzed a variety of data monthly to ensure that goals are met and to assess and respond to gaps for all five of its mission-critical occupations: nurses, pharmacists, clinical psychologists, social workers, and physicians. However, the plan did not report the results of any of these assessments. Our analysis of the template that DOD sent to the functional community managers found that the template did not make clear that each functional community should report the results of any gap analyses, the reasons why it could not conduct these assessments, or timelines for when the assessments would be conducted. Without this information, DOD is limited in its ability to identify where its critical shortages lie so that it may direct limited resources to the areas of highest priority.

Under a previous strategic-plan requirement, DOD was required to submit a strategic plan to Congress by January 6, 2007, with updates of that plan to be submitted on March 1 of each subsequent year through 2009. The National Defense Authorization Act for Fiscal Year 2010 repealed this requirement and enacted section 115b of Title 10 of the United States Code. From October 2009 until December 2011, section 115b required submission of the plan on an annual basis rather than by any specific date.
In December 2011, the National Defense Authorization Act for Fiscal Year 2012 amended section 115b to make the strategic plan a biennial requirement, rather than an annual one. Our analysis shows, based on these requirements, that DOD's first three submissions were 304, 115, and 395 days late, respectively. Additionally, while DOD issued its third strategic workforce plan on March 31, 2010, the department issued its fourth, and most recent, plan nearly 24 months later, on March 27, 2012. When DOD began development of its fourth plan, the department was required to submit its workforce plan on an annual basis; by the time DOD issued the plan, the reporting requirement had been revised from an annual to a biennial one. We note, however, that DOD's report was already at least 8 months overdue at the time of that revision. Further, while DOD delayed issuance of its fourth plan until March 2012, it continued to use fiscal year 2010 data as its baseline. Figure 2 shows the number of days each of DOD's strategic workforce plans has been late since 2007.

Officials attributed the delays in the production of DOD's 2010 strategic workforce plan to long internal processing times and staff turnover. According to these officials, the plan's progress was affected by turnover among contractor personnel as well as among the leadership and staff within the strategic workforce planning office at DOD. DOD recognized these delays, and the Deputy Assistant Secretary of Defense for Civilian Personnel Policy testified before the House Armed Services Committee in July 2011 that the 2010-2018 report would be issued in late August 2011. However, it remained in draft form for another 7 months and was not issued until March 2012. Our prior work on internal control standards for the federal government has shown that agencies rely on timely information to carry out their responsibilities. For an agency to manage and control its operations effectively, it must have relevant, reliable, and timely communications relating to internal as well as external events. We found that although the Strategic Human Capital Planning Office provided a suggested schedule, DOD officials did not adhere to it. Without up-to-date information, decision makers do not have relevant information for managing the critical needs of the federal workforce in a timely manner. Officials responsible for the plan told us they anticipate issuing the 2012 strategic workforce plan between July and September 2013.

DOD, in its 2010-2018 workforce plan, did not include an assessment of the appropriate mix of military, civilian, and contractor personnel or an assessment of the capabilities of each of these workforces. Section 115b of Title 10 of the United States Code requires DOD to conduct an assessment of the appropriate mix of military, civilian, and contractor personnel capabilities. To compile the workforce mix data, DOD officials responsible for the plan developed and distributed a reporting template to be completed by functional community managers. This template requested that the functional community managers provide the percentages of DOD civilian personnel, military personnel, and contractors in each mission-critical occupation. Additionally, the template requested that each functional community provide information on the desired workforce mix for fiscal year 2016 and, if possible, interim goals. Our review found that 2 of the 11 functional communities provided the mix of their workforces, while 9 communities provided partial or no data.
Specifically, the medical and human-resources functional communities provided the percentages of military, civilian, and contractor personnel for their current workforce, and reported their desired mix for fiscal year 2016, as the template requested. For example, the medical functional community provided workforce mix data for its military, civilian, and contractor personnel in each of its mission-critical occupations. According to officials responsible for the strategic workforce plan, the medical functional community was able to provide workforce mix data because the community already tracked personnel data as a way to maintain oversight. Conversely, the logistics and information-technology functional communities provided only the military and civilian workforce data and did not include contractor data. The intelligence functional community did not provide any workforce mix data for inclusion in the 2010 strategic workforce plan.

Moreover, data on contractor personnel were incomplete. During this review, DOD officials responsible for the plan stated that they have difficulties tracking contractor data, explaining that DOD contracts for services rather than for individuals. We note, however, that this issue is not new. DOD guidance requires defense officials to consider personnel costs, among other factors, when making certain workforce decisions. For example, a February 2005 DOD directive states that missions shall be accomplished using the least costly mix of personnel (military, civilian, and contract) consistent with military requirements and other needs of the department. Subsequently, in April 2010, DOD issued an instruction that included guidance on implementing the prior directive. Further, it is DOD policy that DOD components follow prescribed business rules when performing an economic analysis in support of workforce decisions. These rules apply when, among other circumstances, DOD's components decide whether to use DOD civilians to perform functions that are currently being performed by contractors but could be performed by DOD civilians.

By law, DOD is required to annually compile and review an inventory of activities performed pursuant to contracts to help provide better insight into the number of contractor full-time equivalents providing services to the department and the functions they are performing. Additionally, the National Defense Authorization Act for Fiscal Year 2012 requires appropriate DOD officials to develop a plan, including an enforcement mechanism, to ensure that this inventory of contracted services is used to inform strategic workforce planning, among other things. The act also directed the Secretary of Defense to establish policies and procedures for determining the most appropriate and cost-efficient mix of military, civilian, and contractor personnel to perform the mission of the department. Further, the act directed that these policies and procedures specifically require DOD to use the strategic workforce plan, among other things, when making these determinations, and that the policies and procedures, once developed, inform the strategic workforce planning process. Earlier this year, we reported that DOD has difficulty collecting data on the number of contractors performing work and that it is working on a means to collect these data. We also reported that DOD has submitted to Congress a plan to collect personnel data directly from contractors that would help inform the department of the number of full-time-equivalent contractor staff.
According to this plan, DOD will institute a phased-in approach to develop an inventory of contracted services database by fiscal year 2016. In the meantime, the functional communities did not provide all required information, in part, because the department did not request it. The template, for example, did not ask the functional communities to report the capabilities of their civilian, military, and contractor personnel in mission-critical occupations, and as a result none of the functional communities reported them. Further, the template did not ask the functional communities to report on their assessments of the appropriate mix of these workforces within their communities and, accordingly, none of the communities provided this type of assessment. Without a complete assessment, it is difficult for DOD to know if its civilian workforce is properly sized to carry out its vital missions.

DOD developed five performance measures to assess progress in implementing its strategic workforce plan, and the measures generally align with the department's goals. We have previously reported that performance measures should align with goals and track progress toward the goals of the organization. However, it is not clear in all cases how these measures will help DOD demonstrate progress in meeting all of the reporting requirements contained in section 115b of Title 10 of the United States Code.

In response to statutory requirements, DOD developed five results-oriented performance measures to assess progress in implementing its strategic workforce plan. Specifically, the Office of the Under Secretary of Defense for Personnel and Readiness developed five baseline performance measures, which address

- workforce-mission readiness (the percentage of managers reporting that they have the talent needed to meet their mission);
- mission-critical occupations' end-strength (the percentage difference between the actual end-strength and the target end-strength for mission-critical occupations);
- key milestones (the percentage of key milestones met by each mission-critical occupation);
- competency-model development (the number of competency models developed for mission-critical occupations); and
- loss rates for new hires (the 18-month loss rate from hiring date for new federal-civilian hires in mission-critical occupations).

According to the 2010-2018 plan, the Office of the Under Secretary of Defense for Personnel and Readiness based the first four of these measures—workforce-mission readiness, mission-critical occupations' end-strength, key milestones, and competency-model development—on goals identified in DOD's companion document to its overall civilian human-capital strategic plan. According to the plan, officials developed the fifth measure—loss rates for new hires—to support the overall strategic plan for the Office of the Under Secretary of Defense for Personnel and Readiness. Collectively, these baseline measures were established relative to the strategic objectives set for tracking and supporting organizational decision making within the Office of the Under Secretary of Defense for Personnel and Readiness. We have previously reported that performance measures should align with goals and track progress toward the goals of the organization. Additionally, OPM best practices state that performance measures can help drive desired behavior, provide direction, and enable an organization to test its progress in achieving goals.
Accordingly, DOD developed the five measures to meet goals and objectives identified in a key DOD strategic document. All five performance measures include targets to track progress toward goals—such as a 70 percent target for key milestones in mission-critical occupations—so that the results of any progress can be easily compared to the targets. Additionally, the performance measures are quantifiable. For example, one of the performance measures establishes a 15 percent allowable variance between the actual end-strength and the target end-strength of mission-critical occupations (a simple sketch of this calculation appears later in this report). While DOD introduced these measures for the first time in its 2010-2018 strategic workforce plan, the department conducted preliminary assessments of its progress against those measures. In this plan, DOD reported in its preliminary observations that it had met two performance measures—key milestones and competency-model development—and partially met two other measures—workforce-mission readiness and the end-strength of mission-critical occupations. For example, according to OSD's preliminary assessment, more than half of the mission-critical occupations were within the 15 percent variance. DOD will use the fifth performance measure, which addresses loss rates for new hires, to assess the department's progress in implementing the plan in the next strategic workforce planning cycle.

We have previously recommended that DOD develop a performance plan that includes establishing implementation goals and time frames, measuring performance, and aligning activities with resources. While the performance measures that DOD established to monitor the department's progress in implementing its strategic workforce plan generally align with departmental goals and priorities, it is not clear in all cases how the five measures will help DOD demonstrate progress in meeting all the reporting requirements contained in section 115b of Title 10 of the United States Code. While DOD is not required to develop performance measures that monitor progress in meeting the statutory requirements, our prior work has shown that agencies that have been successful in measuring their performance generally developed measures that are responsive to multiple priorities and complement different program strategies. Additionally, DOD is required to develop performance measures to monitor progress in implementing the strategic workforce plan, and the plan itself states that one of its goals is to make progress toward meeting the statutory requirements.

Section 115b of Title 10 of the United States Code requires DOD to include in its 2010-2018 strategic workforce plan an assessment of, among other things, gaps in the existing and future civilian workforce that should be addressed to ensure that the department has continued access to the critical skills and competencies it needs, and the appropriate mix of military, civilian, and contractor personnel capabilities. During this review, we found, as we reported earlier in this report, that the department did not conduct comprehensive assessments in these two areas. Although one of DOD's performance measures—key milestones—identifies assessments of competency gaps and workforce mix as key milestones, the plan does not describe how the department assessed progress in these areas or the interim steps by which it plans to meet these milestones.
As a result, it is unclear how this measure addresses DOD's progress in implementing the portions of the plan related to these two requirements, and how the performance measures and the department's efforts align with and address congressional requirements. According to prior GAO work, performance measures should align with and indicate progress toward the goals of the organization. Without a clear, effective alignment of DOD's performance measures with its statutory reporting requirements, the department will not be in the best position to measure and report how it is meeting its congressional requirements.

With about a third of DOD's civilian workforce eligible to retire by 2015—during a time of changing national security threats and challenging fiscal realities—it is imperative that decision makers in DOD and Congress have access to complete and timely information on the skills, competencies, and any associated gaps within DOD's civilian workforce. However, because the office responsible for developing the plan did not provide sufficiently detailed guidance to the managers who were responsible for providing key data, the information in the current plan on skills and competencies varies significantly. Further, while DOD officials have stated that they do not have the necessary tools in place to conduct gap analyses across the board, the department has not reported the results of any gap analyses that it has conducted or provided timeframes for conducting the remaining gap analyses; this situation diminishes the plan's utility as a workforce planning document. To the extent that DOD provided data in 2012, the data were based on information from 2010, which further limits this document's use for planning purposes. When the reports use dated information, decision makers do not have relevant information for managing the critical needs of the federal workforce in a timely manner. Further, DOD did not collect all required information for its 2010-2018 strategic workforce plan, including the number or percentage of military, civilian, and contractor personnel and the capabilities of those three workforces. Without revised guidance specifying the need to collect all information required for a complete assessment to determine the appropriate mix of the three workforces, DOD will have difficulty determining if its civilian workforce is properly sized to carry out essential missions. Finally, where DOD has identified performance measures and indicated progress toward the goals of the strategic plan, those measures are not, in all cases, aligned with DOD's congressionally mandated reporting requirements; the measures also do not provide detail about how DOD plans to meet those requirements, making it difficult for DOD to demonstrate progress.

To meet the congressional requirement to conduct assessments of critical skills, competencies, and gaps for both existing and future civilian workforces, we recommend that the Secretary of Defense direct the Under Secretary of Defense for Personnel and Readiness to include clearly defined terms and processes for conducting these assessments in the guidance disseminated for developing future strategic workforce plans.

To help ensure that Congress has the necessary information to provide effective oversight over DOD's civilian workforce, we recommend that the Secretary of Defense direct the Under Secretary of Defense for Personnel and Readiness to conduct competency gap analyses for DOD's mission-critical occupations and report the results.
When managers cannot conduct such analyses, we recommend that DOD report a timeline in the strategic workforce plan for providing these assessments.

To help ensure that the data presented in DOD's strategic workforce plans are current and timely, we recommend that the Secretary of Defense direct the Under Secretary of Defense for Personnel and Readiness to establish and adhere to timelines that will ensure issuance of future strategic workforce plans in accordance with statutory timeframes.

To enhance the information that DOD provides Congress in its strategic workforce plan, we recommend that the Secretary of Defense direct the Under Secretary of Defense for Personnel and Readiness to provide guidance for developing future strategic workforce plans that clearly directs the functional communities to collect information that identifies not only the number or percentage of personnel in its military, civilian, and contractor workforces but also the capabilities of the appropriate mix of those three workforces.

To better develop and submit future DOD strategic workforce plans, we recommend that the Secretary of Defense direct the Office of the Under Secretary of Defense for Personnel and Readiness to enhance the department's results-oriented performance measures by revising existing measures or developing additional measures that will more clearly align with DOD's efforts to monitor progress in meeting the strategic workforce planning requirements in section 115b of Title 10 of the United States Code.

In written comments on a draft of this report, DOD concurred with our first recommendation and partially concurred with the remaining four recommendations. DOD's comments are reprinted in appendix II. While DOD acknowledged that we had conducted a thorough review and assessment of its Fiscal Year 2010-2018 Strategic Workforce Plan, DOD also expressed its disappointment that we did not appear to give the department credit for the major progress that it has made, including actions to reframe its planning progress from the current state to a comprehensive future state by 2015. Further, DOD stated that the overall negative tone, in its opinion, overshadowed the monumental efforts of the department. We disagree. The objectives in our final report are consistent with the objectives we presented to DOD when we first notified the department of our review at the beginning of this engagement, and we did provide positive examples where DOD had responded to congressional direction, especially as those actions related to our report's objectives. For example, we state clearly in our report, among other things, that DOD assessed to varying degrees the existing and future critical skills and competencies for all but one of its mission-critical occupations. This has been a longstanding issue, and such assessment represents progress. Further, we reported that DOD developed performance measures to assess progress in implementing its workforce plan. This was a new reporting requirement for DOD, and we reported that DOD had been responsive to it. DOD also asserted that our recommendations simply restate areas for improvement that the department had already identified in its plan and has already addressed since the plan was published. We note, however, that these issues are not new. We first reported on DOD's strategic workforce planning for its civilian workforce in 2004.
Subsequently, Congress mandated that DOD develop and submit civilian workforce strategic plans to the congressional defense committees, and that we conduct our own independent assessment of those plans. We have conducted three reviews of DOD's plans since 2008, and our work has reported mixed results. We recommended in 2008, for example, that DOD address all of its statutory reporting requirements, and we note that DOD did not concur with this recommendation. (In 2010, we reported that DOD's civilian workforce plan addressed 5 and partially addressed 9 of DOD's 14 legislative requirements.) In 2009, we recommended, among other things, that DOD develop a performance plan that includes establishing implementation goals and timeframes, measuring performance, and aligning activities with resources. DOD partially concurred with these recommendations. Given DOD's response to our previous reports and recommendations on these issues, we have reviewed the recommendations that we present in this report and continue to believe that corrective action is needed.

DOD concurred with our first recommendation to include, in the guidance that it disseminates for developing future workforce plans, clearly defined terms and processes for conducting these assessments. DOD stated in its agency comments that it has already provided numerous governance and policy documents, among other materials, to assist key stakeholders in meeting strategic workforce plan reporting requirements. We make similar statements in our report. DOD also stated in its agency comments, however, that the department strives for continuous improvement and has already provided additional guidance for the next planning cycle; for this reason, it believes no additional direction from the Secretary of Defense is needed. We do not disagree that DOD strives for continuous improvement. However, during the course of our audit work, we found, as we state in our report, that functional community managers interpreted questions in DOD's guiding template differently and developed different understandings of key terms. Therefore, we continue to believe that this recommendation will enhance the development of DOD's next strategic workforce plan.

DOD partially concurred with our second recommendation that DOD conduct gap analyses for its mission-critical occupations and report on the results and, when managers cannot conduct such analyses, report a timeline for providing these assessments. DOD also stated its belief that no additional direction from the Secretary of Defense is needed with regard to this recommendation. In its agency comments, DOD stated that the department focused on the identification of critical skill gaps based on staffing levels in its mission-critical occupations. We agree that DOD's plan includes these data. However, we reported that DOD is required to include an assessment of competency gaps in its existing and future civilian employee workforces, and our analyses found that DOD's functional community managers reported conducting gap assessments for only 8 of DOD's 22 mission-critical occupations. Therefore, we continue to believe that DOD needs to conduct these analyses and, for clarity, we added references to competency gap analyses in our finding and recommendation, as appropriate. DOD stated in its agency comments that competency gaps will be assessed in the future.
DOD also partially concurred with our third recommendation that DOD establish and adhere to timelines that will ensure issuance of future strategic workforce plans in accordance with statutory timeframes but, as in its other responses, added that no additional direction is needed from the Secretary of Defense at this time. In its comments, DOD stated that the department does have an established planning process and timeline, and that this established process aligns with the budget cycle and takes about a year to complete because of the size and complexity of the department. DOD added that the planning cycle timeline is flexible enough to allow for significant events, among other things, and provided a notional strategic workforce plan timeline as an attachment to its agency comments. However, we continue to believe it is key that DOD take steps to adhere to the timelines it establishes in order to meet congressional reporting requirements and enhance the utility of its future reports. As we note in our report, DOD has issued all of its strategic workforce plans late since 2007.

Regarding our fourth recommendation, DOD also partially concurred that it should provide guidance for developing future workforce plans that clearly directs the functional communities to collect information identifying not only the number or percentage of personnel in its military, civilian, and contractor workforces but also the capabilities of the appropriate mix of those three workforces. While DOD agreed that additional improvements are necessary, the department again stated that it did not believe additional direction is necessary from the Secretary of Defense. In its comments, DOD stated that it is preparing to pilot a capabilities-based approach to assess its civilian and military workforces and contract support. We continue to note DOD's existing requirement to conduct an assessment of the appropriate mix of military, civilian, and contractor personnel capabilities, and we look forward to seeing the results of DOD's pilot program.

Finally, DOD also partially concurred with our fifth recommendation that the department enhance its results-oriented performance measures by revising existing measures or developing additional measures that will more clearly align with DOD's efforts to monitor progress in meeting the strategic workforce planning requirements contained in statute. However, again, DOD did not believe any additional direction from the Secretary of Defense was needed. In its response, DOD stated that the measures in the fiscal year 2010-2018 strategic workforce plan do assess progress both in implementing the strategic workforce plan and in meeting the statutory requirements and, as an attachment to its comments, provided a matrix, developed in response to our draft report, to show linkages between the two. Based on the matrix, we agree with DOD's assertion that some alignment does exist between the performance measures and the statutory criteria. However, the justification that DOD provided in its matrix for demonstrating these linkages is not always clear. Further, DOD did not include this analysis in its plan. We did not state in our report that the performance measures that DOD developed were inappropriate in some way. However, our analysis did find that DOD continues to struggle to meet its statutory reporting requirements. Therefore, we continue to believe that DOD can enhance its performance measures by more clearly aligning them to those requirements.
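To illustrate how two of the quantitative performance measures discussed in this report operate, the sketch below checks an end-strength figure against the 15 percent variance target and a milestone count against the 70 percent target. DOD's plan does not spell out its exact formulas, so this is one plausible reading; the function names and input numbers are invented for illustration.

```python
def end_strength_within_variance(actual: int, target: int,
                                 allowed_pct: float = 15.0) -> bool:
    """One plausible reading of the end-strength measure: the actual
    end-strength counts as on track if it falls within the allowed
    percentage variance of the target end-strength."""
    variance_pct = abs(actual - target) / target * 100
    return variance_pct <= allowed_pct

def milestones_target_met(met: int, total: int, target_pct: float = 70.0) -> bool:
    """Key-milestones measure: share of milestones met versus the target."""
    return (met / total) * 100 >= target_pct

# Invented numbers for illustration only.
print(end_strength_within_variance(actual=18_500, target=20_000))  # True: 7.5% variance
print(milestones_target_met(met=5, total=8))                       # False: 62.5% < 70%
```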
We are sending copies of this report to the Secretary of Defense, the Under Secretary of Defense for Personnel and Readiness, and appropriate congressional committees. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions regarding this report, please contact me at (202) 512-3604 or farrellb@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III.

For all three objectives, we evaluated DOD's 2010-2018 Strategic Workforce Plan and supporting documentation. We also interviewed Department of Defense (DOD) officials responsible for developing the Strategic Workforce Plan. These include officials from the Strategic Human Capital Planning Office and the Defense Civilian Personnel Advisory Service within the Office of the Under Secretary of Defense for Personnel and Readiness, the Office of the Under Secretary of Defense for Intelligence, and the military departments. We also met with functional community managers in the information technology, financial management, logistics, and law enforcement communities to determine how each of these communities conducted its strategic workforce planning and how coordination occurred between the various levels of DOD. We selected these four functional communities because they represent three of the largest and one of the smallest functional communities included in the plan. Further, DOD business-systems modernization (information technology), financial management, and DOD supply-chain management (logistics) are on GAO's High-Risk List. To aid in all aspects of our review, we also met with Office of Personnel Management (OPM) officials to identify relevant policy and guidance for federal agencies. Finally, we found the data contained in DOD's 2010-2018 plan to be sufficiently reliable for purposes of assessing efforts in developing and producing civilian strategic workforce plans and providing context for these efforts.

To determine the extent to which DOD assessed existing and future critical skills, competencies, and gaps in its civilian workforce, we reviewed information and data contained in DOD's 2010-2018 strategic workforce plan to identify which of the functional communities completed these assessments, the methods and tools that the functional communities used to conduct the assessments, and the extent to which the functional communities reported the results of their assessments. We obtained and reviewed existing DOD guidance, including guidance related to any automated systems the department may use to facilitate these assessments. We also obtained and reviewed OPM guidance on conducting assessments of the skills, competencies, and gaps of the federal civilian workforce. This included a review of documents to ascertain how DOD used OPM's Workforce Analysis Support System and Civilian Forecasting System to develop the department's civilian-workforce forecasts and projections. Finally, to evaluate the timeliness of DOD's submissions of its strategic workforce plans, we reviewed GAO's prior work on DOD's previous plans as well as our work on internal control standards.
To determine the extent to which DOD assessed its workforces to identify the appropriate mix of military, civilian, and contractor personnel capabilities, we reviewed information and data contained in DOD's 2010-2018 strategic workforce plan to identify which functional communities assessed their workforce mix and the process those communities used to carry out their assessments. We also analyzed DOD's plan to determine the extent to which the plan included an evaluation of the specific capabilities of military, civilian, and contractor personnel. Additionally, we obtained and reviewed DOD guidance on conducting assessments of the appropriateness of the mix of workforces in the federal government.

To determine the extent to which DOD assessed its progress in implementing its strategic workforce plan by using results-oriented performance measures, we reviewed DOD's 2010-2018 strategic workforce plan to identify the performance measures DOD chose to assess its implementation of its plan. We also obtained and reviewed DOD and OPM guidance on using results-oriented performance measures and then evaluated DOD's efforts to apply such guidance. We evaluated DOD's results-oriented performance measures and compared them to the statutory requirements for the plan as identified in section 115b of Title 10 of the United States Code to determine the extent to which the measures addressed the requirements. We also evaluated the performance measures using best practices identified in our previous work to determine their validity and appropriateness.

We conducted this performance audit from July 2011 to September 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Brenda S. Farrell, (202) 512-3604 or farrellb@gao.gov.

In addition to the individual named above, Marion Gatling, Assistant Director; David Moser, Assistant Director; Jerome Brown; Julie Corwin; Brian Pegram; Richard Powelson; Courtney Reid; Terry L. Richardson; Norris Smith; Jennifer Weber; and Michael Willems made key contributions to this report.

Human Capital: Complete Information and More Analysis Needed to Enhance DOD's Civilian Senior Leader Strategic Workforce Plan. GAO-12-990R. Washington, D.C.: September 19, 2012.

DOD Civilian Workforce: Observations on DOD's Efforts to Plan for Civilian Workforce Requirements. GAO-12-962T. Washington, D.C.: July 26, 2012.

Defense Acquisition Workforce: Improved Processes, Guidance, and Planning Needed to Enhance Use of Workforce Funds. GAO-12-747R. Washington, D.C.: June 20, 2012.

Defense Acquisitions: Further Actions Needed to Improve Accountability for DOD's Inventory of Contracted Services. GAO-12-357. Washington, D.C.: April 6, 2012.

Defense Workforce: DOD Needs to Better Oversee In-sourcing Data and Align In-sourcing Efforts with Strategic Workforce Plans. GAO-12-319. Washington, D.C.: February 9, 2012.

DOD Civilian Personnel: Competency Gap Analyses and Other Actions Needed to Enhance DOD's Strategic Workforce Plans. GAO-11-827T. Washington, D.C.: July 14, 2011.

High-Risk Series: An Update. GAO-11-278. Washington, D.C.: February 16, 2011.
Human Capital: Opportunities Exist for DOD to Enhance Its Approach for Determining Civilian Senior Leader Workforce Needs. GAO-11-136. Washington, D.C.: November 4, 2010.

Human Capital: Further Actions Needed to Enhance DOD's Civilian Strategic Workforce Plan. GAO-10-814R. Washington, D.C.: September 27, 2010.

Human Capital: Opportunities Exist to Build on Recent Progress to Strengthen DOD's Civilian Human Capital Strategic Plan. GAO-09-235. Washington, D.C.: February 10, 2009.

Human Capital: The Department of Defense's Civilian Human Capital Strategic Plan Does Not Meet Most Statutory Requirements. GAO-08-439R. Washington, D.C.: February 6, 2008.

DOD Civilian Personnel: Comprehensive Strategic Workforce Plans Needed. GAO-04-753. Washington, D.C.: June 30, 2004.

Human Capital: Key Principles for Effective Strategic Workforce Planning. GAO-04-39. Washington, D.C.: December 11, 2003.

Human Capital: Strategic Approach Should Guide DOD Civilian Workforce Management. GAO/T-GGD/NSIAD-00-120. Washington, D.C.: March 9, 2000.
As of June 2012, DOD reported a full-time civilian workforce of about 780,000 personnel. According to DOD, about 30 percent of its civilian workforce and 60 percent of its civilian senior leaders will be eligible to retire by March 31, 2015. Such potential losses may result in significant skill gaps. The National Defense Authorization Act for Fiscal Year 2010 requires GAO to submit a report on DOD's 2010-2018 strategic civilian workforce plan. In response, GAO determined the extent to which DOD identified critical skills, competencies, and gaps; assessed its workforce mix; and measured progress in implementing its strategic workforce plan. GAO analyzed DOD's strategic workforce plan and supporting documents, and met with managers of four functional communities within the civilian personnel community (information technology, financial management, logistics, and law enforcement), selected because they represent three of the largest and one of the smallest functional communities, to determine how they conducted their strategic workforce planning.

Over the last decade, Congress has passed legislation requiring the Department of Defense (DOD) to conduct human capital planning efforts for the department's civilian workforce. Specifically, section 115b of Title 10 of the United States Code, enacted in October 2009, requires DOD to develop and submit to congressional defense committees a strategic workforce plan to shape and improve the department's civilian workforce. Among other things, the law requires DOD to report on the mission-critical skills, competencies, and gaps in its existing and future civilian workforces; the appropriate mix of military, civilian, and contractor personnel capabilities; and the department's progress in implementing its strategic workforce plan using results-oriented performance measures. While DOD has addressed some of its reporting requirements to some extent, it has not addressed others. DOD, to varying degrees, assessed the existing and future critical skills and competencies for 21 of the 22 occupations that it has identified as mission critical, but conducted competency gap assessments for only 8 of these 22 occupations. In some but not all cases, DOD provided details about skills and competencies. However, it did not report the results of any of its gap analyses for its mission-critical occupations. DOD did not assess the appropriate mix of military, civilian, and contractor workforces or provide an assessment of the capabilities of each of these workforces. Only two of the civilian community managers who provided input presented data on all three workforces; the remaining nine community managers provided partial data, generally on military and civilian personnel only, or no data at all. DOD guidance requires, among other things, that DOD missions be accomplished with the least costly mix of military, civilian, and contractor personnel, consistent with military requirements and other needs of the department. DOD assessed progress in implementing its strategic workforce plan by using newly developed measures that contain characteristics of valid results-oriented performance measures, but these measures are not in all cases aligned with DOD's statutory reporting requirements. For example, although DOD is required to conduct gap analyses and assess its workforce mix, it is unclear how the measures that DOD developed will help to address these requirements.
The input to DOD's strategic workforce plan on critical skills and competencies varied, in part, because the reporting template that DOD sent to its civilian personnel community managers did not contain sufficient detail and clear definitions. Also, the template did not convey departmental expectations for conducting gap analyses or communicate clear guidance for reporting on workforce mix assessments. Without sufficiently detailed guidance to help ensure complete reporting, input into future plans will continue to vary, and the plan's usefulness as a workforce planning document will be diminished. Further, in those cases where DOD's performance measures are not aligned with its congressionally mandated reporting requirements, it is difficult for DOD to demonstrate progress against those requirements. GAO's recommendations include that DOD issue clearer guidance for assessing its skills and competencies, conduct and report on gap analyses of its mission-critical occupations, clarify its guidance for assessing workforce mix, and enhance its performance measures to align with congressionally mandated reporting requirements. DOD concurred or partially concurred with GAO's recommendations. While DOD raised some issues about the need for further actions, GAO continues to believe that DOD's workforce planning could be enhanced.
Dramatic increases in computer interconnectivity, especially in the use of the Internet, continue to revolutionize the way our government, our nation, and much of the world communicate and conduct business. The benefits have been enormous. Vast amounts of information are now literally at our fingertips, facilitating research on virtually every topic imaginable; financial and other business transactions can be executed almost instantaneously, often 24 hours a day; and electronic mail, Internet Web sites, and computer bulletin boards allow us to communicate quickly and easily with a virtually unlimited number of individuals and groups.

However, in addition to such benefits, this widespread interconnectivity poses significant risks to the government's and our nation's computer systems and, more important, to the critical operations and infrastructures they support. For example, telecommunications, power distribution, water supply, public health services, national defense (including the military's warfighting capability), law enforcement, government services, and emergency services all depend on the security of their computer operations. Yet the same speed and accessibility that create the enormous benefits of the computer age, if not properly controlled, allow individuals and organizations to inexpensively eavesdrop on or interfere with these operations from remote locations for mischievous or malicious purposes, including fraud or sabotage. Table 1 summarizes the key threats to our nation's infrastructures, as observed by the Federal Bureau of Investigation (FBI).

Government officials are increasingly concerned about attacks from individuals and groups with malicious intent, whether for crime, terrorism, foreign intelligence gathering, or acts of war. According to the FBI, terrorists, transnational criminals, and intelligence services are quickly becoming aware of and using information exploitation tools such as computer viruses, Trojan horses, worms, logic bombs, and eavesdropping sniffers that can destroy, intercept, degrade the integrity of, or deny access to data. In addition, the disgruntled organization insider is a significant threat, since these individuals often have organizational knowledge that allows them to gain unrestricted access and inflict damage or steal assets without possessing a great deal of expertise in computer intrusions. As greater amounts of money are transferred through computer systems, as more sensitive economic and commercial information is exchanged electronically, and as the nation's defense and intelligence communities increasingly rely on commercially available information technology (IT), the likelihood increases that information attacks will threaten vital national interests.

As the number of individuals with computer skills has increased, more intrusion or "hacking" tools have become readily available and relatively easy to use. A hacker can literally download tools from the Internet and "point and click" to start an attack. Experts also agree that there has been a steady advance in the sophistication and effectiveness of attack technology. Intruders quickly develop attacks to exploit vulnerabilities discovered in products, use these attacks to compromise computers, and share them with other attackers. In addition, they can combine these attacks with other forms of technology to develop programs that automatically scan the network for vulnerable systems, attack them, compromise them, and use them to spread the attack even further.
Along with these increasing threats, the number of computer security incidents reported to the CERT® Coordination Center has also risen dramatically, from 9,859 in 1999 to 52,658 in 2001 and 82,094 in 2002. And these are only the reported attacks. The Director of the CERT Centers stated that he estimates that as much as 80 percent of actual security incidents go unreported, in most cases because (1) the organization was unable to recognize that its systems had been penetrated or there were no indications of penetration or attack, or (2) the organization was reluctant to report. Figure 1 shows the number of incidents reported to the CERT Coordination Center from 1995 through 2002.

According to the National Security Agency, foreign governments already have or are developing computer attack capabilities, and potential adversaries are developing a body of knowledge about U.S. systems and methods to attack these systems. Since the terrorist attacks of September 11, 2001, warnings of the potential for terrorist cyber attacks against our critical infrastructures have also increased. For example, in February 2002, the threat to these infrastructures was highlighted by the Special Advisor to the President for Cyberspace Security in a Senate briefing when he stated that although to date none of the traditional terrorist groups, such as al Qaeda, has used the Internet to launch a known assault on the United States' infrastructure, information on water systems was discovered on computers found in al Qaeda camps in Afghanistan. Also, in his February 2002 statement for the Senate Select Committee on Intelligence, the director of central intelligence discussed the possibility of cyber warfare attacks by terrorists. He stated that the September 11 attacks demonstrated the nation's dependence on critical infrastructure systems that rely on electronic and computer networks. Further, he noted that attacks of this nature would become an increasingly viable option for terrorists as they and other foreign adversaries become more familiar with these targets and the technologies required to attack them.

Since September 11, 2001, the critical link between cyberspace and physical space has been increasingly recognized. In his November 2002 congressional testimony, the Director of the CERT Centers at Carnegie-Mellon University noted that supervisory control and data acquisition (SCADA) systems and other forms of networked computer systems have been used for years to control power grids, gas and oil distribution pipelines, water treatment and distribution systems, hydroelectric and flood control dams, oil and chemical refineries, and other physical systems, and that these control systems are increasingly being connected to communications links and networks to reduce operational costs by supporting remote maintenance, remote control, and remote update functions. These computer-controlled and network-connected systems are potential targets for individuals bent on causing massive disruption and physical damage, and the use of commercial, off-the-shelf technologies for these systems without adequate security enhancements can significantly limit available approaches to protection and may increase the number of potential attackers. The risks posed by this increasing and evolving threat are demonstrated in reports of actual and potential attacks and disruptions.
For example:

On February 11, 2003, the National Infrastructure Protection Center (NIPC) issued an advisory to heighten awareness of an increase in global hacking activities as a result of the increasing tensions between the United States and Iraq. This advisory noted that during a time of increased international tension, illegal cyber activity often escalates, such as spamming, Web page defacements, and denial-of-service attacks. Further, this activity can originate within another country that is party to the tension; can be state sponsored or encouraged; or can come from domestic organizations or individuals independently. The advisory also stated that attacks may have one of several objectives, including political activism targeting Iraq or those sympathetic to Iraq by self-described "patriot" hackers, political activism or disruptive attacks targeting United States systems by those opposed to any potential conflict with Iraq, or even criminal activity masquerading as or using the current crisis to further personal goals.

According to a preliminary study coordinated by the Cooperative Association for Internet Data Analysis (CAIDA), on January 25, 2003, the SQL Slammer worm (also known as "Sapphire") infected more than 90 percent of vulnerable computers worldwide within 10 minutes of its release on the Internet, making it the fastest computer worm in history. As the study reports, exploiting a known vulnerability for which a patch had been available since July 2002, Slammer doubled in size every 8.5 seconds and achieved its full scanning rate (55 million scans per second) after about 3 minutes. It caused considerable harm through network outages and such unforeseen consequences as canceled airline flights and automated teller machine (ATM) failures. Further, the study emphasizes that the effects would likely have been more severe had Slammer carried a malicious payload, attacked a more widespread vulnerability, or targeted a more popular service.

In November 2002, news reports indicated that a British computer administrator was indicted on charges that, during the past year, he broke into 92 U.S. computer networks in 14 states, including networks belonging to the Pentagon, private companies, and the National Aeronautics and Space Administration, causing some $900,000 in damage to computers. According to a Justice Department official, these attacks were one of the biggest hacks ever against the U.S. military. This official also said that the attacker used his home computer and automated software available on the Internet to scan tens of thousands of computers on U.S. military networks, looking for ones that might suffer from flaws in Microsoft Corporation's Windows NT operating system software.

On October 21, 2002, NIPC reported that all 13 of the root-name servers that provide the primary roadmap for almost all Internet communications were targeted in a massive "distributed denial of service" attack. Seven of the servers failed to respond to legitimate network traffic, and two others failed intermittently during the attack. Because of safeguards, most Internet users experienced no slowdowns or outages.

In July 2002, NIPC reported that the potential for compound cyber and physical attacks, referred to as "swarming attacks," is an emerging threat to the U.S. critical infrastructure. As NIPC reports, the effects of a swarming attack include slowing or complicating the response to a physical attack.
For example, cyber attacks can be used to delay the notification of emergency services and to deny the resources needed to manage the consequences of a physical attack. In addition, a swarming attack could be used to worsen the effects of a physical attack. For instance, a cyber attack on a natural gas distribution pipeline that opens safety valves and releases fuels or gas in the area of a planned physical attack could enhance the force of the physical attack. Consistent with this threat, NIPC also released an information bulletin in April 2002 warning against possible physical attacks on U.S. financial institutions by unspecified terrorists. In August 2001, we reported to a subcommittee of the House Government Reform Committee that the attacks referred to as Code Red, Code Red II, and SirCam had affected millions of computer users, shut down Web sites, slowed Internet service, and disrupted business and government operations. Then in September 2001, the Nimda worm appeared using some of the most significant attack profile aspects of Code Red II and 1999’s infamous Melissa virus that allowed it to spread widely in a short amount of time. Security experts estimate that Code Red, Sircam, and Nimda have caused billions of dollars in damage. For the federal government, we have reported since 1996 that poor information security is a widespread problem with potentially devastating consequences. Although agencies have taken steps to redesign and strengthen their information system security programs, our analyses of information security at major federal agencies have shown that federal systems were not being adequately protected from computer-based threats, even though these systems process, store, and transmit enormous amounts of sensitive data and are indispensable to many federal agency operations. For the past several years, we have analyzed audit results for 24 of the largest federal agencies and found that all 24 had significant information security weaknesses. Further, we have identified information security as a governmentwide high-risk issue in reports to the Congress since 1997—most recently in January 2003. As we reported in November 2002, our analyses of reports issued from October 2001 through October 2002, continued to show significant weaknesses in federal computer systems that put critical operations and assets at risk. Weaknesses continued to be reported in each of the 24 agencies included in our review, and they covered all six major areas of general controls—the policies, procedures, and technical controls that apply to all or a large segment of an entity’s information systems and help ensure their proper operation. These six areas are (1) security program management, which provides the framework for ensuring that risks are understood and that effective controls are selected and properly implemented; (2) access controls, which ensure that only authorized individuals can read, alter, or delete data; (3) software development and change controls, which ensure that only authorized software programs are implemented; (4) segregation of duties, which reduces the risk that one individual can independently perform inappropriate actions without detection; (5) operating systems controls, which protect sensitive programs that support multiple applications from tampering and misuse; and (6) service continuity, which ensures that computer-dependent operations experience no significant disruptions. 
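The per-area results discussed next are simple tallies across the 24 agency audits. As a minimal sketch of how such a tally can be computed, the following Python fragment counts, for each of the six general control areas, how many agencies had reported weaknesses in that area; the agency names and findings are hypothetical stand-ins, not the actual audit data.

```python
from collections import Counter

CONTROL_AREAS = [
    "security program management",
    "access controls",
    "software development and change controls",
    "segregation of duties",
    "operating system controls",
    "service continuity",
]

# Hypothetical audit findings: agency -> control areas with significant
# weaknesses (the real figures come from the 24 agency audits).
findings = {
    "Agency A": {"security program management", "access controls"},
    "Agency B": {"security program management", "service continuity"},
    "Agency C": {"security program management", "access controls",
                 "segregation of duties"},
}

tally = Counter()
for areas in findings.values():
    tally.update(areas)  # count each area at most once per agency

total = len(findings)
for area in CONTROL_AREAS:
    print(f"{area}: {tally[area]} of {total} agencies ({tally[area] / total:.0%})")
```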
Figure 2 illustrates the distribution of weaknesses for the six general control areas across the 24 agencies.

Although our analyses showed that most agencies had significant weaknesses in these six control areas, as in past years' analyses, weaknesses were most often identified in security program management and access controls. For security program management, we identified weaknesses at all 24 agencies in 2002, the same as reported for 2001 and compared with 21 of the 24 agencies (88 percent) in 2000. Security program management, which is fundamental to the appropriate selection and effectiveness of the other categories of controls, covers a range of activities related to understanding information security risks; selecting and implementing controls commensurate with risk; and ensuring that controls, once implemented, continue to operate effectively.

For access controls, we found weaknesses at 22 of 24 agencies (92 percent) in 2002 (no significant weaknesses were found for one agency, and access controls were not reviewed for another). This compares with access control weaknesses found at all 24 agencies in both 2000 and 2001. Weak access controls for sensitive data and systems make it possible for an individual or group to inappropriately modify, destroy, or disclose sensitive data or computer programs for purposes such as personal gain or sabotage. In today's increasingly interconnected computing environment, poor access controls can expose an agency's information and operations to attacks from remote locations all over the world by individuals with only minimal computer and telecommunications resources and expertise.

Our analyses also showed service continuity weaknesses at 20 of the 24 agencies (83 percent), with no significant weaknesses found for 3 agencies (service continuity controls were not reviewed for another). This compares with 19 agencies with service continuity weaknesses in 2001 and 20 in 2000. Service continuity controls are important in that they help ensure that when unexpected events occur, critical operations will continue without undue interruption and that crucial, sensitive data are protected. If service continuity controls are inadequate, an agency can lose the capability to process, retrieve, and protect electronically maintained information, which can significantly affect its ability to accomplish its mission. Further, such controls are particularly important in the wake of the terrorist attacks of September 11, 2001.

These analyses of information security at federal agencies also showed that the scope of audit work performed has continued to expand to more fully cover all six major areas of general controls at each agency. Not surprisingly, this has led to the identification of additional areas of weakness at some agencies. These increases in reported weaknesses do not necessarily mean that information security at federal agencies is getting worse. They more likely indicate that information security weaknesses are becoming more fully understood, an important step toward addressing the overall problem. Nevertheless, the results leave no doubt that serious, pervasive weaknesses persist, and as auditors increase their proficiency and the body of audit evidence expands, it is probable that additional significant deficiencies will be identified. Most of the audits represented in figure 2 were performed as part of financial statement audits.
At some agencies with primarily financial missions, such as the Department of the Treasury and the Social Security Administration, these audits covered the bulk of mission-related operations. However, at agencies whose missions are primarily nonfinancial, such as DOD and the Department of Justice, the audits may provide a less complete picture of the agency's overall security posture, because the audit objectives focused on the financial statements and did not include evaluations of individual systems supporting nonfinancial operations. In response to congressional interest, beginning in fiscal year 1999, we expanded our audit focus to cover a wider range of nonfinancial operations, a trend we expect to continue. Audit coverage for nonfinancial systems has also increased as agencies and their IGs reviewed and evaluated their information security programs as required by GISRA.

To fully understand the significance of the weaknesses we identified, it is necessary to link them to the risks they present to federal operations and assets. Virtually all federal operations are supported by automated systems and electronic data, and agencies would find it difficult, if not impossible, to carry out their missions and account for their resources without these information assets. Hence, the degree of risk caused by security weaknesses is extremely high. The weaknesses identified place a broad array of federal operations and assets at risk. For example:

• resources, such as federal payments and collections, could be lost or stolen;

• computer resources could be used for unauthorized purposes or to launch attacks on others;

• sensitive information, such as taxpayer data, social security records, medical records, and proprietary business information, could be inappropriately disclosed, browsed, or copied for purposes of espionage or other types of crime;

• critical operations, such as those supporting national defense and emergency services, could be disrupted;

• data could be modified or destroyed for purposes of fraud or disruption; and

• agency missions could be undermined by embarrassing incidents that result in diminished confidence in agencies' ability to conduct operations and fulfill their fiduciary responsibilities.

Concerned with accounts of attacks on commercial systems via the Internet and reports of significant weaknesses in federal computer systems that make them vulnerable to attack, Congress enacted GISRA on October 30, 2000; it became effective on November 29, 2000, for a period of 2 years. GISRA supplemented information security requirements established in the Computer Security Act of 1987, the Paperwork Reduction Act of 1995, and the Clinger-Cohen Act of 1996 and was consistent with existing information security guidance issued by the Office of Management and Budget (OMB) and the National Institute of Standards and Technology (NIST), as well as audit and best practice guidance issued by GAO. Most importantly, however, GISRA consolidated these separate requirements and guidance into an overall framework for managing information security and established new annual review, independent evaluation, and reporting requirements to help ensure agency implementation and both OMB and congressional oversight.

GISRA assigned specific responsibilities to OMB, agency heads and chief information officers (CIOs), and IGs. OMB was responsible for establishing and overseeing policies, standards, and guidelines for information security.
This included the authority to approve agency information security programs; however, GISRA delegated OMB's responsibilities regarding national security systems to national security agencies. OMB was also required to submit an annual report to the Congress summarizing the results of agencies' evaluations of their information security programs. GISRA did not specify a date for this report; OMB released its fiscal year 2001 report in February 2002 but has not yet released its fiscal year 2002 report.

GISRA required each agency, including national security agencies, to establish an agencywide risk-based information security program, to be overseen by the agency CIO, and to ensure that information security is practiced throughout the life cycle of each agency system. Specifically, this program was to include

• periodic risk assessments that consider internal and external threats to the integrity, confidentiality, and availability of systems and of data supporting critical operations and assets;

• the development and implementation of risk-based, cost-effective policies and procedures to provide security protections for information collected or maintained by or for the agency;

• training on security responsibilities for information security personnel and on security awareness for agency personnel;

• periodic management testing and evaluation of the effectiveness of policies, procedures, controls, and techniques;

• a process for identifying and remediating any significant deficiencies;

• procedures for detecting, reporting, and responding to security incidents; and

• an annual program review by agency program officials.

In addition to the responsibilities listed above, GISRA required each agency to have an annual independent evaluation of its information security program and practices, including control testing and compliance assessment. The evaluations of non-national-security systems were to be performed by the agency IG or an independent evaluator, and the results of these evaluations were to be reported to OMB. For the evaluation of national security systems, special provisions included having national security agencies designate evaluators, restricting the reporting of evaluation results, and having the IG or an independent evaluator perform an audit of the independent evaluation. For national security systems, only the results of each audit of an evaluation were to be reported to OMB.

GISRA expired on November 29, 2002; on December 17, 2002, FISMA was enacted as title III of the E-Government Act of 2002. This act permanently authorizes and strengthens the information security program, evaluation, and reporting requirements established by GISRA. In addition, among other things, FISMA requires NIST to develop, for systems other than national security systems, (1) standards to be used by all agencies to categorize all of their information and information systems based on the objectives of providing appropriate levels of information security according to a range of risk levels; (2) guidelines recommending the types of information and information systems to be included in each category; and (3) minimum information security requirements for information and information systems in each category. FISMA also requires each agency to develop, maintain, and annually update an inventory of major information systems (including major national security systems) operated by the agency or under its control.
This inventory is also to include an identification of the interfaces between each system and all other systems or networks, including those not operated by or under the control of the agency.

In our March 2002 testimony, we reported that the initial implementation of GISRA was a significant step in improving federal agencies' information security programs and addressing their serious, pervasive information security weaknesses. Agencies also noted benefits of this first-year implementation, including increased management attention to and accountability for information security, and the administration undertook other important actions to address information security, such as integrating information security into the President's Management Agenda Scorecard. However, along with these benefits, agencies' reviews of their information security programs showed that agencies had not established information security programs consistent with the legislative requirements and that significant weaknesses existed. We also noted that although agency actions were under way to strengthen information security and implement these requirements, significant improvement would require sustained management attention and OMB and congressional oversight.

Our analysis of second-year (fiscal year 2002) implementation of GISRA showed progress in several areas, including the types of information being reported and made available for oversight, governmentwide efforts to improve information security, and agencies' implementation of information security requirements. Despite this progress, our analyses of agency and IG reports showed that the 24 agencies had not yet established information security programs consistent with legislative requirements and that corrective action plans did not always include all identified weaknesses and needed independent validation to ensure that weaknesses are corrected.

For fiscal year 2002 GISRA reporting, OMB provided the agencies with updated reporting instructions and guidance on preparing and submitting plans of action and milestones (corrective action plans). Like the instructions for fiscal year 2001, this updated guidance listed specific topics that the agencies were to address, many of which were referenced back to corresponding requirements of GISRA. However, in response to agency requests and recommendations we made to OMB as a result of our review of fiscal year 2001 GISRA implementation, this guidance also incorporated several significant changes to help improve the consistency and quality of the information being reported for oversight by OMB and the Congress. These changes included the following:

• Reporting instructions provided new high-level management performance measures that the agencies and IGs were required to use to report on agency officials' performance. According to OMB, most agencies did not provide performance measures or actual levels of performance where asked to do so for fiscal year 2001 reporting, and the agencies requested that OMB develop such measures. These required performance measures include, for example, the number and percentage of systems that have been assessed for risk, the number of contractor operations or facilities that were reviewed, and the number of employees with significant security responsibilities who received specialized training.

• Instructions confirmed that agencies were expected to review all systems annually. OMB explained that GISRA requires senior agency program officials to review each security program for effectiveness at least annually and that the purpose of the security programs discussed in GISRA is to ensure the protection of the systems and data covered by the program. Thus, a review of each system is essential to determine the program's effectiveness; only the depth and breadth of such system reviews are flexible.

• Agencies were generally required to use all elements of NIST Special Publication 800-26, Security Self-Assessment Guide for Information Technology Systems, to review their systems. This guide accompanies NIST's Security Assessment Framework methodology, which agency officials can use to determine the current status of their security programs. The guide itself uses an extensive questionnaire containing specific control objectives and techniques against which an unclassified system or group of interconnected systems can be tested and measured. For the fiscal year 2001 reporting period, OMB encouraged agencies to use this guide but did not require its use because the guide was not completed until well into the reporting period. NIST finalized the guide in November 2001, and for fiscal year 2002 reporting, OMB required its use unless an agency and its IG confirmed that an agency-developed methodology captured all elements of the guide. To automate the completion of the questionnaire, NIST also developed a tool that can be found at its Computer Security Resource Center Web site: http://csrc.nist.gov/asset/.

• OMB requested that IGs verify that agency corrective action plans identify all known security weaknesses within an agency, including its components, and that the plans are used by the IG, the agency, its major components, and the program officials within them as the authoritative agency management mechanism to prioritize, track, and manage all agency efforts to close security performance gaps.

• OMB authorized agencies to release certain information from their corrective action plans to assist the Congress in its oversight responsibilities. Agencies could release this information, as requested, excluding certain elements, such as estimated funding resources and the scheduled completion dates for resolving a weakness.

OMB's report to the Congress on fiscal year 2001 GISRA implementation provided an overview of OMB's and agencies' implementation efforts, summarized the overall results of OMB's analyses, and included individual agency summaries for 24 of the largest federal departments and agencies. Overall, OMB reported that although examples of good security exist in many agencies, and others were working very hard to improve their performance, many agencies had significant deficiencies in every important area of security. In particular, the report highlighted six common security weaknesses. These weaknesses are listed below, along with an update of the activities under way to address them.

1. Lack of senior management attention to information security. Last year, OMB reported that, to address this issue, it was working through the President's Management Council and the Critical Infrastructure Protection Board to promote sustained attention to security as part of its work on the President's Management Agenda and the integration of security into the Scorecard. OMB also reported that it included security instructions in budget passback guidance and sent security letters to each agency highlighting the lack of senior management attention and describing specific actions OMB is taking to assist the agency.
According to OMB officials, although the President's Critical Infrastructure Protection Board was recently dissolved, OMB continues to coordinate security issues with the President's Homeland Security Council and the Department of Homeland Security. These officials also said that they are continuing to work with the agencies and that security is an integral part of assessing agencies' performance for the E-Government component of the Scorecard.

2. Inadequate accountability for job and program performance related to IT security. OMB reported that it was working with the agencies and other entities to develop workable measures of job and program performance to hold federal employees accountable for their security responsibilities. As discussed previously, OMB's instructions to federal agencies for fiscal year 2002 GISRA reporting included high-level management performance measures. Related to this initiative, in October 2002, NIST issued an initial public draft of a security metrics guide for IT systems to provide guidance on how an organization, through the use of metrics, can determine the adequacy of in-place security controls, policies, and procedures. The draft also explains the metric development and implementation process and how metrics can be used to adequately justify security control investments.

3. Limited security training for general users, IT professionals, and security professionals. OMB reported that, along with federal agencies, it was working through the Critical Infrastructure Protection Board's education committee and the CIO Council's Workforce Committee to address this issue. OMB also reported that work was under way to identify and disseminate security training best practices through NIST's Federal Agency Security Practices Web site and that one of the administration's electronic government initiatives is to establish and deliver electronic training on a number of mandatory topics, including security, for use by all federal agencies, along with state and local governments. As an example of progress on this initiative, OMB officials pointed to an online training initiative, www.golearn.gov. Launched in July 2002 by the Office of Personnel Management (OPM), this site offers training in an online environment, including IT security courses such as security awareness, fundamentals of Internet security, and managing network security. Other activities in this area include NIST's July 2002 issuance of draft guidance on designing, developing, implementing, and maintaining an awareness and training program within an agency's IT security program.

4. Inadequate integration of security into the capital planning and investment control process. OMB reported that it was integrating security into the capital planning and investment control process to ensure that adequate security is incorporated directly into, and funded over the life cycle of, all systems and programs before funding is approved. Specifically, OMB established criteria requiring that agencies report security costs for each major and significant IT investment, document in their business cases that adequate security controls have been incorporated into the life cycle planning and funding of each IT investment, and tie their corrective action plans for a system directly to the business case for that IT investment. Another criterion was that agency security reports and corrective action plans were presumed to reflect the agency's security priorities and, thus, would be a central tool for OMB in prioritizing funding for systems.
OMB officials confirmed that these activities were continuing and included providing additional guidance in OMB Circular A-11 on identifying security costs. In addition, they said that draft NIST guidelines for federal IT systems would help ensure that agencies consider security throughout the system life cycle. Under OMB policy, responsible federal officials are required to make a security determination (called accreditation) to authorize placing IT systems into operation. In order for these officials to make sound, risk-based decisions, a security evaluation (known as certification) of the IT system is needed. The NIST guidelines are to establish a standard process, general tasks, and specific subtasks for certifying and accrediting systems, and to provide a new approach that uses the standardized process to verify the correctness and effectiveness of security controls employed in a system. The guidelines will also employ standardized, minimum security controls and standardized verification techniques and procedures that NIST indicates will be provided in future guidance.

5. Poor security for contractor-provided services. OMB reported last year that, under the guidance of the OMB-led security committee established by Executive Order 13231 (since eliminated), an issue group would develop recommendations that include addressing how security is handled in contracts. OMB also reported that it would work with the CIO Council and the Procurement Executives Council to establish a training program that ensures appropriate contractor training in security. OMB officials stated that these activities are continuing and that the issue group had made recommendations to the Federal Acquisition Regulation Council. In addition, in October 2002, NIST issued a draft guide on security considerations in federal IT procurements, which includes specifications, clauses, and tasks for areas such as IT security training and awareness, personnel security, physical security, and security features in systems.

6. Limited capability to detect, report, and share information on vulnerabilities or to detect intrusions, suspected intrusions, or virus infections. OMB reported that the Federal Computer Incident Response Center (FedCIRC) reports to it quarterly on the status of IT security incidents across the federal government. OMB also reported that, under OMB and Critical Infrastructure Protection Board guidance, GSA was exploring methods to disseminate patches to all agencies more effectively. OMB officials pointed to the Patch Authentication and Dissemination Capability Program, which FedCIRC introduced in January 2003 as a free service to federal civilian agencies. According to FedCIRC, this service provides a trusted source of validated patches and notifications on new threats and vulnerabilities that have the potential to disrupt federal government mission-critical systems and networks. It is a Web-enabled service that obtains patches from vendors, validates that a patch does only what it was created to do, and provides agencies notifications based on established profiles. We also noted that in August 2002, NIST published procedures for handling security patches that provide principles and methodologies for establishing an explicit and documented patching and vulnerability policy and a systematic, accountable, and documented process for handling patches.
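The NIST procedures and the FedCIRC service are described here only at the policy level, so the following Python sketch is illustrative rather than an actual data model; the field names and the sample advisory identifier are assumptions. It shows the kind of documented, accountable record that an explicit patching policy implies: each patch is validated, tested, and tracked to installation, and overdue items can be flagged for follow-up.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class PatchRecord:
    advisory_id: str                 # e.g., a vendor or FedCIRC notification ID
    affected_systems: list[str]
    validated: bool = False          # patch confirmed to do only what it claims
    tested: bool = False             # tested in the agency's own environment
    installed_on: Optional[date] = None
    due_by: Optional[date] = None

def overdue(records: list[PatchRecord], today: date) -> list[PatchRecord]:
    """Return patches past their due date that are not yet installed."""
    return [r for r in records
            if r.installed_on is None and r.due_by is not None and r.due_by < today]

# Example entries; the identifier and dates are illustrative only.
records = [
    PatchRecord("VENDOR-2002-17", ["db-server-1"], validated=True, tested=True,
                due_by=date(2002, 8, 15)),
]
for r in overdue(records, date(2003, 1, 25)):
    print(f"OVERDUE: {r.advisory_id} on {r.affected_systems}")
```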
In addition to the activities identified for these specific weaknesses, in last year's report, OMB reported that it would direct all large agencies to undertake a Project Matrix review to more clearly identify and prioritize the security needs for government assets. Project Matrix is a methodology developed by the Critical Infrastructure Assurance Office (CIAO), an office recently transferred to the Department of Homeland Security, that identifies the critical assets within an agency, prioritizes them, and then identifies their interrelationships with other agencies or the private sector. OMB reported that once reviews had been completed at each large agency, it would identify cross-government activities and lines of business for Project Matrix reviews, so that the critical operations and assets of the federal government's critical enterprise architecture, and their relationships beyond government, would be identified both vertically and horizontally.

As of July 2002, a CIAO official reported that of the 31 agencies targeted for Project Matrix reviews, 18 had begun their reviews. Of those 18, 5 had completed the first step of the methodology, identifying their critical assets; 2 had found no candidate assets to undergo the identification process; 5 had begun the second step, identifying the other federal government assets, systems, and networks upon which their critical assets depend to operate; and none had begun the third step, identifying all associated dependencies on private-sector owned and operated critical infrastructures. According to a CIAO official in December 2002, the office's goal was to complete Project Matrix reviews for 24 of the 31 identified agencies by the end of fiscal year 2004 and for the remaining 7 in fiscal year 2005. However, this official also said that, at the request of the Office of Homeland Security, CIAO was revising and streamlining its Project Matrix methodology to be less labor intensive for the agencies and to reduce the time needed to identify critical assets.

In our recent discussions with OMB officials, they said they were requiring Project Matrix reviews for 24 large departments and agencies and that, as part of their GISRA reporting, agencies were required to report on the status of their efforts to identify critical assets and their dependencies. However, they acknowledged that OMB did not establish any deadlines for the completion of Project Matrix reviews. In our February 2003 report, we noted that neither the administration nor the agencies we reviewed had milestones for the completion of Project Matrix analyses, and we recommended that agencies coordinate with CIAO to set these milestones.

Finally, in February 2002, OMB reported that a number of efforts were under way to address security weaknesses in industry software development and that chief among them were national policy-level activities of the Critical Infrastructure Protection Board (since eliminated). At the technical product level, OMB reported that the National Information Assurance Partnership, operated jointly by NIST and the National Security Agency, was certifying private-sector laboratories to which product vendors may submit their software for analysis and certification, but that this certification process is a lengthy one and often cannot accommodate the "time-to-market" imperative that the technology industry faces. According to recent discussions with OMB officials, the National Information Assurance Partnership efforts are still under way.
Fiscal year 2002 GISRA reporting by CIOs and independent evaluations by IGs for the 24 agencies provided a better baseline for measuring improvements in federal information security, not only because of the performance measures that OMB now requires, but also because of agencies' increased review coverage and use of consistent methodologies. For example, 16 agencies reported that they had reviewed the security of 60 percent or more of their systems and programs for their fiscal year 2002 GISRA reporting, with 10 of these reporting that they reviewed from 90 to 100 percent. Further, 13 agencies reported that coverage of agency systems and programs increased for fiscal year 2002 compared with fiscal year 2001. However, with 8 agencies reporting that they reviewed less than half of their systems, improvements are still needed.

Regarding their methodologies, 21 agencies reported that, as required by OMB, they used NIST's Security Self-Assessment Guide for Information Technology Systems or developed their own methodology that addressed all elements of the guide; only 3 agencies reported that they did not. By not following the NIST guide, agencies may not identify all weaknesses. For example, one agency reported that the methodology it used incorporated many of the elements of NIST's self-assessment guide, but the IG reported that the methodology neither called for the detailed level of system reviews required by the NIST guide nor included the requirement to test and evaluate security controls.

In performing our analyses, we summarized and categorized the reported information, including the data provided for the OMB-prescribed performance measures. There were several instances in which agency reports either did not address a question or measure or did not provide sufficient data for it. In addition, IGs' independent evaluations sometimes showed different results than CIO reporting or identified data inaccuracies. Further, IG reporting did not always include comparable data, particularly for the performance measures, in part because, although OMB instructions said that the IGs should use the performance measures to assist in evaluating agency officials' performance, the IGs were not required to review the agencies' reported measures. Summaries of our analyses for key requirements follow.

GISRA required agencies to perform periodic threat-based risk assessments for systems and data. Risk assessments are an essential element of risk management and overall security program management and, as our best practice work has shown, are an integral part of the management processes of leading organizations. Risk assessments help ensure that the greatest risks have been identified and addressed, increase the understanding of risk, and provide support for needed controls. Our reviews of federal agencies, however, frequently show deficiencies related to assessing risk, such as security plans for major systems that were not developed on the basis of risk. As a result, these agencies had accepted an unknown level of risk by default rather than consciously deciding what level of risk was tolerable.

As one of its performance measures for this requirement, OMB required agencies to report the number and percentage of their systems that had been assessed for risk during fiscal year 2001 and fiscal year 2002. Our analyses of reporting for this measure showed some overall progress.
For example, of the 24 agencies we reviewed, 13 reported an increase in the percentage of systems assessed for fiscal year 2002 compared with fiscal year 2001. In addition, as illustrated in figure 3, for fiscal year 2002, 11 agencies reported that they had assessed risk for 90 to 100 percent of their systems. However, the figure also shows that further efforts are needed by other agencies, including the 9 that reported that less than 60 percent of their systems had been assessed for risk.

GISRA also required the agency head to ensure that the agency's information security plan is practiced throughout the life cycle of each agency system. In its reporting instructions, OMB required agencies to report whether the agency head had taken specific and direct actions to oversee that program officials and the CIO are ensuring that security plans are up to date and practiced throughout the life cycle. Agencies also had to report the number and percentage of systems that have an up-to-date security plan. Our analyses showed that although most agencies reported that they had taken such actions, IG reports disagreed for a number of agencies, and many systems do not have up-to-date security plans. Specifically, 21 agencies reported that the agency head had taken actions to oversee that security plans are up to date and practiced throughout the life cycle, whereas the IGs reported that only 9 agencies had taken such actions. One IG reported that the agency's security plan guidance predates revisions to NIST and OMB guidance and, as a result, does not contain key elements, such as the risk assessment methodology used to identify threats and vulnerabilities. In addition, another IG reported that although progress had been made, security plans had not been completed for 62 percent of the agency's systems. Regarding the status of agencies' security plans, as shown in figure 4, half of the 24 agencies reported that they had up-to-date security plans for 60 percent or more of their systems for fiscal year 2002, including 7 that reported having such plans for 90 percent or more.

GISRA required agencies to provide training on security awareness for agency personnel and on security responsibilities for information security personnel. Our studies of best practices at leading organizations have shown that these organizations took steps to ensure that personnel involved in various aspects of their information security programs had the skills and knowledge they needed. They also recognized that staff expertise had to be frequently updated to keep abreast of ongoing changes in threats, vulnerabilities, software, security techniques, and security monitoring tools. However, our past information security reviews at individual agencies have shown that agencies have not provided adequate computer security training to their employees, including contractor staff.

Among the performance measures for these requirements, OMB required agencies to report the number and percentage of employees, including contractors, who received security training during fiscal years 2001 and 2002 and the number of employees with significant security responsibilities who received specialized training. For agency employee and contractor security training, our analyses showed that 16 agencies reported that they provided security training to 60 percent or more of their employees and contractors for fiscal year 2002, with 9 reporting 90 percent or more.
Of the remaining 8 agencies, 4 reported that such training was provided for less than half of their employees and contractors, 1 reported that none had received this training, and 3 provided insufficient data for this measure. For specialized training for employees with significant security responsibilities, some progress was indicated, but additional training is needed. As indicated in figure 5, our analyses showed that 11 agencies reported that 60 percent or more of their employees with significant security responsibilities had received specialized training for fiscal year 2002, with 5 reporting 90 percent or more. Of the remaining 13 agencies, 4 reported less than 30 percent, and 1 reported that none had received such training.

Under GISRA, the agency head was responsible for ensuring that the appropriate agency officials evaluated the effectiveness of the information security program, including testing controls. The act also required that the agencywide information security program include periodic management testing and evaluation of the effectiveness of information security policies and procedures. Periodically evaluating the effectiveness of security policies and controls, and acting to address any identified weaknesses, are fundamental activities that allow an organization to manage its information security risks cost-effectively, rather than reacting to individual problems ad hoc only after a violation has been detected or an audit finding has been reported. Further, management control testing and evaluation as part of program reviews can supplement control testing and evaluation in IG and GAO audits to help provide a more complete picture of the agencies' security postures.

As a performance measure for this requirement, OMB required the agencies to report the number and percentage of systems for which security controls had been tested and evaluated during fiscal years 2001 and 2002. Our analyses of the data agencies reported for this measure showed that although 15 agencies reported an increase in the overall percentage of systems being tested and evaluated for fiscal year 2002, most agencies are not testing essentially all of their systems. As shown in figure 6, 14 agencies reported that they had tested the controls of less than 60 percent of their systems for fiscal year 2002. Of the remaining 10 agencies, 4 reported that they had tested and evaluated controls for 90 percent or more of their systems.

As another measure, OMB also required agencies to report the number and percentage of systems that had been authorized for processing following certification and accreditation. According to NIST's draft Guidelines for the Security Certification and Accreditation (C&A) of Federal Information Technology Systems (Special Publication 800-37), accreditation is the authorization of an IT system to process, store, or transmit information, granted by a management official. Accreditation provides a form of quality control and challenges managers and technical staff to find the best fit for security, given technical constraints, operational constraints, and mission requirements. Certification is the comprehensive evaluation of the technical and nontechnical security controls of an IT system that supports the accreditation process by establishing the extent to which a particular design and implementation meets a set of specified security requirements.
Certification provides the necessary information for a management official to formally declare that an IT system is approved to operate at an acceptable level of risk. The accreditation decision is based on the implementation of an agreed-upon set of management, operational, and technical controls, and by accrediting the system, the management official accepts the risk associated with it.

Our analysis of agencies' reports showed mixed progress for this measure. For example, 10 agencies reported increases in the percentage of systems authorized for processing following certification and accreditation compared with fiscal year 2001, but 8 reported decreases and 3 reported no change (3 others did not provide sufficient data). In addition, as shown in figure 7, 8 agencies reported that for fiscal year 2002, 60 percent or more of their systems had been authorized for processing following certification and accreditation, with only 3 of these reporting from 90 to 100 percent. Of the remaining 16 agencies reporting less than 60 percent, 3 reported that none of their systems had been authorized.

In addition to this mixed progress, IG reports identified instances in which agencies' certification and accreditation efforts were inadequate. For example, one agency reported that 43 percent of its systems were authorized for processing following certification and accreditation. The IG's report agreed but also noted that over a fourth of the systems identified as authorized had been operating with an interim authorization and did not meet all of the security requirements to be granted accreditation. The IG also stated that, due to the risk posed by systems operating without certification and full accreditation, the department should consider identifying this deficiency as a material weakness.

GISRA required agencies to implement procedures for detecting, reporting, and responding to security incidents. Although even strong controls may not block all intrusions and misuse, organizations can reduce the risks associated with such events if they promptly take steps to detect intrusions and misuse before significant damage can be done. In addition, accounting for and analyzing security problems and incidents are effective ways for an organization to gain a better understanding of threats to its information and of the cost of its security-related problems. Such analyses can also pinpoint vulnerabilities that need to be addressed to help ensure that they will not be exploited again. In this regard, problem and incident reports can provide valuable input for risk assessments, help in prioritizing security improvement efforts, and be used to illustrate risks and related trends in reports to senior management.

Our information security reviews also confirm that federal agencies have not adequately (1) prevented intrusions before they occur, (2) detected intrusions as they occur, (3) responded to successful intrusions, or (4) reported intrusions to staff and management. Such weaknesses provide little assurance that unauthorized attempts to access sensitive information will be identified and appropriate actions taken in time to prevent or minimize damage. OMB included a number of performance measures in agency reporting instructions that were related to detecting, reporting, and responding to security incidents.
These included the number of agency components with an incident handling and response capability, whether the agency and its major components share incident information with FedCIRC in a timely manner, and the numbers of incidents reported. OMB also required that agencies report on how they confirmed that patches had been tested and installed in a timely manner.

Our analyses of agencies' reports showed that although most agencies reported that they have established incident response capabilities, implementation of these capabilities is still not complete. For example, 12 agencies reported that for fiscal year 2002, 90 percent or more of their components had incident handling and response capabilities, and 8 others reported that they provided these capabilities to components through a central point within the agency. However, in at least two cases, the IGs' evaluations found that incident response capabilities were not always implemented. For example, one IG reported that the department established and implemented its computer security incident response capability on August 1, 2002, but had not enforced procedures to ensure that components comply with a consistent methodology to identify, document, and report computer security incidents. Another IG reported that the agency had released incident handling procedures and established a computer incident response team, but had not formally assigned members to the team or effectively communicated the procedures to employees. Our analyses also showed that for fiscal year 2002, 13 agencies reported that they had oversight procedures to verify that patches have been tested and installed in a timely manner, and 10 reported that they did not. Of those that did not have such procedures, several specifically mentioned that they planned to participate in FedCIRC's patch management process.

GISRA required that each agencywide information security program ensure the integrity, confidentiality, and availability of systems and data supporting the agency's critical operations and assets. In addition, as mentioned previously, OMB directed 24 of the largest agencies to undergo a Project Matrix review to identify and characterize the operations and assets most critical to the nation, along with those assets' associated infrastructure dependencies and interdependencies. Accordingly, as part of GISRA reporting, OMB required the agencies to report whether they had undergone a Project Matrix review or had used another methodology to identify their critical assets and those assets' interdependencies and interrelationships.

Our analyses of agencies' reports showed some overall progress in identifying critical assets, but limited progress in identifying interdependencies. As shown in figure 8, a total of 14 agencies reported that they had identified their critical assets and operations: 10 using Project Matrix and 4 using other methodologies. In addition, 5 more agencies reported that they were in some stage of identifying their critical assets and operations, and 3 more planned to do so in fiscal year 2003. Our analyses also showed that 3 agencies reported that they had identified the interdependencies for their critical assets, and 4 others reported that they were in some stage of undertaking this process.
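Project Matrix itself is a structured review methodology, not software, but its second and third steps amount to walking a dependency graph outward from an agency's critical assets. The following minimal Python sketch illustrates that idea; the asset names and dependency edges are hypothetical, chosen only to show how a critical asset's supporting government and private-sector assets can be enumerated transitively.

```python
# Hypothetical starting set from step one: the agency's critical assets.
critical_assets = {"payment-processing-system"}

# Hypothetical dependency edges: asset -> assets it depends on to operate.
depends_on = {
    "payment-processing-system": {"treasury-settlement-network",
                                  "regional-power-grid"},
    "treasury-settlement-network": {"commercial-telecom-backbone"},
}

def transitive_dependencies(assets: set[str],
                            graph: dict[str, set[str]]) -> set[str]:
    """Return every asset reachable from the starting set via dependency edges."""
    seen: set[str] = set()
    stack = list(assets)
    while stack:
        for dep in graph.get(stack.pop(), ()):
            if dep not in seen:
                seen.add(dep)
                stack.append(dep)
    return seen

# Prints the three supporting assets (set ordering may vary).
print(transitive_dependencies(critical_assets, depends_on))
```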
Contingency plans provide specific instructions for restoring critical systems, including such things as arrangements for alternative processing facilities in case the usual facilities are significantly damaged or cannot be accessed. At many of the agencies we reviewed, we found incomplete plans and procedures for ensuring that critical operations can continue when unexpected events occur, such as a temporary power failure, an accidental loss of files, or a major disaster. These plans and procedures were incomplete because operations and supporting resources had not been fully analyzed to determine which were most critical and would need to be restored first. Further, existing plans were not fully tested to identify their weaknesses. As a result, many agencies have inadequate assurance that they can recover operational capability in a timely, orderly manner after a disruptive attack.

As another of its performance measures, OMB required agencies to report the number and percentage of systems for which contingency plans had been tested in the past year. As shown in figure 9, our analyses showed that for fiscal year 2002, only 2 agencies reported that they had tested contingency plans for 90 percent or more of their systems, while 20 had tested contingency plans for less than 60 percent of their systems. One agency reported that none of its plans had been tested.

GISRA requires agencies to develop and implement risk-based, cost-effective policies and procedures to provide security protection for information collected or maintained either by the agency or for it by another agency or contractor. In its fiscal year 2001 GISRA report to the Congress, OMB identified poor security for contractor-provided services as a common weakness, and for fiscal year 2002 reporting, it included performance measures to help indicate whether agency program officials and the CIO used appropriate methods, such as audits and inspections, to ensure that services provided by a contractor are adequately secure and meet security requirements.

Our analyses showed that a number of agencies reported that they had reviewed a large percentage of the services provided by contractors, but others had reviewed only a small percentage. For operations and assets under the control of agency program officials, 16 agencies reported that for fiscal year 2002 they reviewed 60 percent or more of contractor operations or facilities, with 7 of these reporting that they reviewed 90 percent or more; 4 reported that they reviewed less than 30 percent. For operations and assets under the control of the CIO, 11 agencies reported that for fiscal year 2002 they reviewed 60 percent or more of contractor operations or facilities, with 7 of these reporting that they reviewed 90 percent or more; 3 reported that they reviewed less than 30 percent; and 5 agencies reported that they had no services provided by a contractor or another agency.

GISRA requires that each agency examine the adequacy and effectiveness of information security policies, procedures, and practices in plans and reports related to annual agency budgets and other statutory performance reporting requirements. The act also requires each agency to describe the resources, including budget, staffing, and training, that are necessary to implement its agencywide information security program.
For GISRA reporting, OMB required agencies to report information on the total security funding included in their fiscal year 2002 budget request, the fiscal year 2002 enacted budget, and the President's fiscal year 2003 budget, and to include (1) a breakdown of security costs by each major operating division or bureau and (2) CIP costs that apply to the protection of government operations and assets. Most agencies (21) reported total security funding for these budgets, although 13 did not show costs by major operating division or bureau, costs for CIP, or both. Further, most agencies reported including security costs in their budget requests and justifications. For example:

• For the fiscal year 2003 budget, 16 agencies reported that they had submitted capital asset plans and justifications to OMB with all requisite security information; of the remaining 8 agencies, 5 reported that this information was missing from less than 30 percent of their capital asset plans and justifications. Last year, 19 agencies reported that they had not included security requirements and costs on every fiscal year 2002 capital asset plan submitted to OMB.

• For fiscal year 2003, 18 agencies reported that security costs were reported on Exhibit 53 for all agency systems, and 5 reported that these costs were not reported for all agency systems.

GISRA required that agencies develop a process for ensuring that remedial action is taken to address significant deficiencies. Accordingly, OMB required the agency head to work with the CIO and program officials to provide a strategy to correct security weaknesses identified through annual GISRA program reviews and independent evaluations, as well as through other reviews or audits performed throughout the reporting period by the IG or GAO. Agencies were required to submit a corrective action plan for all programs and systems where a security weakness had been identified, plus quarterly updates on the plan's implementation. OMB guidance required that these plans list the identified weaknesses and, for each weakness, identify a point of contact, the resources required to resolve the weakness, the scheduled completion date, key milestones and their completion dates, any milestone changes, the source of the weakness (such as a program review, IG audit, or GAO audit), and the status (ongoing or completed). Agencies were also required to submit quarterly updates of these plans that list the total number of weaknesses identified at the program and system level, as well as the numbers of weaknesses for which corrective actions were completed on time, were ongoing and on schedule, or were delayed. Updates were also to include the number of new weaknesses discovered subsequent to the last corrective action plan or quarterly update.

Our analyses of agencies' fiscal year 2002 corrective action plans and IGs' evaluations of these plans showed that most agencies followed the OMB-prescribed format, but also that several used an existing tracking system to meet this requirement. In theory, these plans could prove to be a useful tool for the agencies in correcting their information security weaknesses. However, their usefulness could be impaired to the extent that they do not identify all weaknesses or do not provide realistic completion estimates. For example, of the 24 agencies, only 5 IGs reported that their agency's corrective action plan addressed all identified significant weaknesses, and 9 specifically reported that their agency's plan did not.
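To make the OMB-prescribed plan structure described above concrete, the following Python sketch models a corrective action plan entry with the required fields and computes the kind of tallies the quarterly updates were to report. The field names, status vocabulary, and sample entry are illustrative assumptions, not OMB's actual reporting format.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Weakness:
    description: str
    point_of_contact: str
    resources_required: str
    scheduled_completion: date
    milestones: list[tuple[str, date]] = field(default_factory=list)
    source: str = "program review"      # or "IG audit", "GAO audit"
    status: str = "ongoing"             # or "completed"

def quarterly_summary(plan: list[Weakness], today: date) -> dict[str, int]:
    """Tally weaknesses the way the quarterly updates were to report them."""
    summary = {"total": len(plan), "completed": 0, "on schedule": 0, "delayed": 0}
    for w in plan:
        if w.status == "completed":
            summary["completed"] += 1
        elif w.scheduled_completion >= today:
            summary["on schedule"] += 1
        else:
            summary["delayed"] += 1
    return summary

# A hypothetical plan entry, for illustration only.
plan = [
    Weakness("No tested contingency plan for payroll system", "CIO office",
             "2 FTEs", date(2003, 6, 30), source="IG audit"),
]
print(quarterly_summary(plan, date(2003, 3, 31)))
```

As the following discussion notes, plans like these are useful only to the extent that the status fields are kept current and the reported corrections are independently validated.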
Our analyses also showed that in several instances, corrective action plans did not indicate the current status of identified weaknesses or include information on whether actions were on track as originally scheduled. Plan progress must be appropriately monitored, and the actual correction of weaknesses may require independent validation. Our analyses showed that three IGs reported that their agencies did not have a centralized tracking system to monitor the status of corrective actions. Also, one IG specifically questioned the accuracy of unverified, self-reported corrective actions in the agency's plan.

Recent audits and reviews, including annual GISRA program reviews and independent evaluations, show that although agencies have made progress in addressing GAO and IG recommendations to improve the effectiveness of their information security, further action is needed. In particular, overall security program management continues to be an area marked by widespread and fundamental problems. Many agencies have not developed security plans for major systems based on risk, have not documented security policies, and have not implemented a program for testing and evaluating the effectiveness of the controls they rely on. As a result, they cannot ensure that the controls they have implemented are operating as intended, and they cannot make informed judgments as to whether they are spending too little or too much of their resources on security.

Further information security improvement efforts are also needed at the governmentwide level, and it is important that these efforts be guided by a comprehensive strategy and, as development of this strategy continues, that certain key issues be addressed. These issues, and the actions currently under way to address them, are as follows.

First, the federal strategy should delineate the roles and responsibilities of the numerous entities involved in federal information security and describe how the activities of these organizations interrelate, who should be held accountable for their success or failure, and whether these activities will effectively and efficiently support national goals.

Second, more specific guidance to agencies on the controls that they need to implement could help ensure adequate protection. Currently, agencies have wide discretion in deciding which computer security controls to implement and the level of rigor with which to enforce these controls. To be sure, no single set of specific controls will be appropriate for all types of systems and data. Nevertheless, our studies of best practices at leading organizations have shown that more specific guidance is important. In particular, specific mandatory standards for varying risk levels can clarify expectations for information protection, including audit criteria; provide a standard framework for assessing information security risk; help ensure that shared data are appropriately protected; and reduce demands for the limited resources that would otherwise be needed to independently develop security controls. FISMA requires NIST to develop standards that provide mandatory minimum information security requirements.

Third, ensuring effective implementation of agency information security and CIP plans will require active monitoring by the agencies to determine whether milestones are being met and whether testing is being performed to determine whether policies and controls are operating as intended. With routine periodic evaluations, such as those required by GISRA and now FISMA, performance measurements can be more meaningful.
In addition, the annual evaluation, reporting, and monitoring process established through the GISRA and FISMA provisions is an important mechanism, previously missing, to hold agencies accountable for implementing effective security and to manage the problem from a governmentwide perspective. Fourth, the Congress and the executive branch can use audit results, including the results of GISRA and FISMA reporting, to monitor agency performance and take whatever action is deemed advisable to remedy identified problems. Such oversight is essential for holding agencies accountable for their performance, as was demonstrated by OMB and congressional efforts to oversee the Year 2000 computer challenge. Fifth, agencies must have the technical expertise they need to select, implement, and maintain controls that protect their information systems. Similarly, the federal government must maximize the value of its technical staff by sharing expertise and information. As highlighted during the Year 2000 challenge, the availability of adequate technical and audit expertise is a continuing concern to agencies. Sixth, agencies must allocate resources sufficient to support their information security and infrastructure protection activities. In our review of first-year GISRA implementation, we reported that many agencies emphasized the need for adequate funding to implement security requirements, and that security funding varied widely across the agencies. Funding for security is already embedded to some extent in agency budgets for computer system development efforts and routine network and system management and maintenance. However, additional amounts are likely to be needed to address specific weaknesses and new tasks. At the same time, OMB and congressional oversight of future spending on information security will be important for ensuring that agencies are not using the funds they receive to continue ad hoc, piecemeal security fixes that are not supported by a strong agency risk-management process. Further, we agree with OMB that much can be done to cost-effectively address common weaknesses, such as limited security training, across government rather than individually by agency. Seventh, expanded research is needed in the area of information systems protection. Although a number of research efforts are under way, experts have noted that more is needed to achieve significant advances. In this regard, the Congress recently passed and the President signed into law the Cyber Security Research and Development Act to provide $903 million over 5 years for cybersecurity research and education programs. This law directs the National Science Foundation to create new cybersecurity research centers, program grants, and fellowships. It also directs NIST to create new program grants for partnerships between academia and industry. CIP involves activities that enhance the security of our nation's cyber and physical, public and private infrastructures that are critical to national security, national economic security, and/or national public health and safety. Federal awareness of the importance of securing our nation's critical infrastructures has continued to evolve since the mid-1990s. Over the years, a variety of working groups have been formed, special reports written, federal policies issued, and organizations created to address the issues that have been raised. The following sections summarize key developments in federal CIP policy to provide historical perspective.
In October 1997, the President’s Commission on Critical Infrastructure Protection issued a report describing the potentially devastating implications of poor information security for the nation. The report recommended measures to achieve a higher level of CIP that included industry cooperation and information sharing, a national organization structure, a revised program of research and development, a broad program of awareness and education, and a reconsideration of related laws. It further stated that a comprehensive effort would need to “include a system of surveillance, assessment, early warning, and response mechanisms to mitigate the potential for cyberthreats.” The report also urged the FBI to continue its efforts to develop warning and threat analysis capabilities, which would enable it to serve as the preliminary national warning center for infrastructure attacks and to provide law enforcement, intelligence, and other information needed to ensure the highest quality analysis possible. In 1998, the President issued Presidential Decision Directive 63 (PDD 63), which described a strategy for cooperative efforts by government and the private sector to protect the physical and cyber-based systems essential to the minimum operations of the economy and the government. PDD 63 called for a range of actions intended to improve federal agency security programs, improve the nation’s ability to detect and respond to serious computer-based and physical attacks, and establish a partnership between the government and the private sector. The directive called on the federal government to serve as a model of how infrastructure assurance is best achieved and designated lead agencies to work with private-sector and government organizations. Further, it established CIP as a national goal and stated that, by the close of 2000, the United States was to have achieved an initial operating capability to protect the nation’s critical infrastructures from intentional destructive acts and, by 2003, have developed the ability to protect the nation’s critical infrastructures from intentional destructive attacks. To accomplish its goals, PDD 63 established and designated organizations to provide central coordination and support, including the Critical Infrastructure Assurance Office (CIAO), an interagency office housed in the Department of Commerce, which was established to develop a national plan for CIP on the basis of infrastructure plans developed by the private sector and federal agencies; the National Infrastructure Protection Center (NIPC), an organization within the FBI, which was expanded to address national-level threat assessment, warning, vulnerability, and law enforcement investigation/response; and the National Infrastructure Assurance Council (NIAC), which was established to enhance the partnership of the public and private sectors in protecting our critical infrastructures. To ensure coverage of critical sectors, PDD 63 also identified eight private- sector infrastructures and five special functions. For each of the infrastuctures and functions, the directive designated lead federal agencies, referred to as sector liaisons, to work with their counterparts in the private sector, referred to as sector coordinators. 
To facilitate private-sector participation, PDD 63 also encouraged the voluntary creation of information sharing and analysis centers (ISACs) to serve as mechanisms for gathering, analyzing, and appropriately sanitizing and disseminating information to and from infrastructure sectors and the federal government through NIPC. Figure 3 displays a high-level overview of the organizations with CIP responsibilities, as outlined by PDD 63. PDD 63 called for a range of activities intended to establish a partnership between the public and private sectors to ensure the security of our nation's critical infrastructures. The sector liaison and the sector coordinator were to work with each other to address problems related to CIP for their sector. In particular, PDD 63 stated that they were to (1) develop and implement vulnerability awareness and education programs and (2) contribute to a sectoral National Infrastructure Assurance Plan by assessing the vulnerabilities of the sector to cyber or physical attacks; recommending a plan to eliminate significant vulnerabilities; proposing a system for identifying and preventing major attacks; and developing a plan for alerting, containing, and rebuffing an attack in progress and then, in coordination with FEMA as appropriate, rapidly reconstituting minimum essential capabilities in the aftermath of an attack. PDD 63 also required every federal department and agency to be responsible for protecting its own critical infrastructures, including both cyber-based and physical assets. To fulfill this responsibility, PDD 63 called for agencies' CIOs to be responsible for information assurance, and it required every agency to appoint a chief infrastructure assurance officer to be responsible for the protection of all other aspects of an agency's critical infrastructure. Further, it required each federal agency to

- develop, implement, and periodically update a plan for protecting its critical infrastructure;
- determine its minimum essential infrastructure that might be a target of attack;
- conduct and periodically update vulnerability assessments of its minimum essential infrastructure;
- develop a recommended remedial plan, based on the vulnerability assessments, that identifies time lines for implementation, responsibilities, and funding; and
- analyze intergovernmental dependencies and mitigate those dependencies.

Other PDD 63 requirements for federal agencies are that they provide vulnerability awareness and education to sensitize people regarding the importance of security and to train them in security standards, particularly regarding cybersystems; that they establish a system for responding to a significant infrastructure attack while it is under way, to help isolate and minimize damage; and that they establish a system for rapidly reconstituting minimum required capabilities for varying levels of successful infrastructure attacks. In January 2000, the White House issued its National Plan for Information Systems Protection. The national plan provided a vision and framework for the federal government to prevent, detect, and respond to attacks on the nation's critical cyber-based infrastructure and to reduce existing vulnerabilities by complementing and focusing existing federal computer security and information technology requirements. Subsequent versions of the plan were expected to (1) define the roles of industry and of state and local governments working in partnership with the federal government to protect physical and cyber-based infrastructures from deliberate attack and (2) examine the international aspects of CIP.
In October 2001, the President issued Executive Order (EO) 13228, establishing the Office of Homeland Security within the Executive Office of the President and the Homeland Security Council. It stated that the Office of Homeland Security was "to develop and coordinate the implementation of a comprehensive national strategy to secure the United States from terrorist threats or attacks." In addition, EO 13228 stated that, among other things, the Office of Homeland Security was to coordinate efforts to protect critical public and privately owned information systems within the United States from terrorist attacks. Further, it established the Homeland Security Council to advise and assist the President with respect to all aspects of homeland security, to serve as the mechanism for ensuring coordination of homeland security-related activities of executive departments and agencies, and to develop and implement homeland security policies. Also in October 2001, President Bush signed EO 13231, establishing the President's Critical Infrastructure Protection Board to coordinate cyber-related federal efforts and programs associated with protecting our nation's critical infrastructures. Executive Order 13231 tasked the board with recommending policies and coordinating programs for protecting CIP-related information systems. The Special Advisor to the President for Cyberspace Security chaired the board. The executive order also established 10 standing committees to support the board's work on a wide range of critical information infrastructure efforts. According to EO 13231, the board's responsibilities were to recommend policies and coordinate programs for protecting information systems for critical infrastructures, including emergency preparedness communications and the physical assets that support such systems. The Special Advisor reported to the Assistant to the President for National Security Affairs and to the Assistant to the President for Homeland Security and coordinated with the Assistant to the President for Economic Policy on issues relating to private-sector systems and economic effects and with the Director of OMB on issues relating to budgets and the security of federal computer systems. Executive Order 13231 emphasized the importance of CIP and the ISACs, but neither order identified additional requirements for agencies to protect their critical infrastructures or suggested additional activities for the ISACs. In July 2002, the President issued the National Strategy for Homeland Security, with strategic objectives to (1) prevent terrorist attacks within the United States, (2) reduce America's vulnerability to terrorism, and (3) minimize the damage and recover from attacks that do occur. To ensure coverage of critical infrastructure sectors, this strategy identified 13 industry sectors, expanded from the 8 originally identified in PDD 63, as essential to our national security, national economic security, and/or national public health and safety. Lead federal agencies were identified and directed to work with their counterparts in the private sector to assess sector vulnerabilities and to develop plans to eliminate vulnerabilities. The sectors and their lead agencies are listed in table 2. The Homeland Security Act of 2002 (signed by the President on November 25, 2002) established the Department of Homeland Security (DHS).
Regarding CIP, the new department is responsible for, among other things, (1) developing a comprehensive national plan for securing the key resources and critical infrastructure of the United States; (2) recommending measures to protect the key resources and critical infrastructure of the United States in coordination with other federal agencies and in cooperation with state and local government agencies and authorities, the private sector, and other entities; and (3) disseminating, as appropriate, information analyzed by the department both within the department and to other federal agencies, state and local government agencies, and private-sector entities to assist in the deterrence, prevention, preemption of, or response to terrorist attacks. To help accomplish these functions, the act created the Information Analysis and Infrastructure Protection Directorate within the new department and transferred to it the functions, personnel, assets, and liabilities of several existing organizations with CIP responsibilities, including NIPC (other than the Computer Investigations and Operations Section) and the CIAO. The National Strategy for Homeland Security called for the Office of Homeland Security and the President’s Critical Infrastructure Protection Board to complete cyber and physical infrastructure protection plans, which would serve as the baseline for later developing the comprehensive national infrastructure protection plan. Such a plan was subsequently required by the Homeland Security Act of 2002. On February 14, 2003, the President released the National Strategy to Secure Cyberspace and the complementary National Strategy for the Physical Protection of Critical Infrastructures and Key Assets. These two strategies identify priorities, actions, and responsibilities for the federal government, including lead agencies and DHS, as well as for state and local governments and the private sector. The National Strategy to Secure Cyberspace is intended to provide an initial framework for both organizing and prioritizing efforts to protect our nation’s cyberspace. It is also to provide direction to federal departments and agencies that have roles in cyberspace security and to identify steps that state and local governments, private companies and organizations, and individual Americans can take to improve our collective cybersecurity. The strategy reiterates the critical infrastructure sectors and the related lead federal agencies as identified in The National Strategy for Homeland Security. In addition, the strategy identifies DHS as the central coordinator for cyberspace efforts. As such, DHS is responsible for coordinating and working with other federal entities involved in cybersecurity. This strategy is organized according to five national priorities, with major actions and initiatives identified for each: 1. A National Cyberspace Security Response System—Coordinated by DHS, this system is described as a public/private architecture for analyzing and warning, managing incidents of national significance, promoting continuity in government systems and private-sector infrastructures, and increasing information sharing across and between organizations to improve cyberspace security. The system is to include governmental entities and nongovernmental entities, such as private-sector ISACs. 
Major actions and initiatives identified for cyberspace security response include providing for the development of tactical and strategic analysis of cyber attacks and vulnerability assessments; expanding the Cyber Warning and Information Network to support the role of DHS in coordinating crisis management for cyberspace security; coordinating processes for voluntary public/private participation in the development of national public/private continuity and contingency plans; exercising cybersecurity continuity plans for federal systems; and improving and enhancing public/private information sharing involving cyber attacks, threats, and vulnerabilities. 2. A National Cyberspace Security Threat and Vulnerability Reduction Program—This priority focuses on reducing threats and deterring malicious actors through effective programs to identify and punish them; identifying and remediating those existing vulnerabilities that, if exploited, could create the most damage to critical systems; and developing new systems with less vulnerability and assessing emerging technologies for vulnerabilities. Other major actions and initiatives include creating a process for national vulnerability assessments to better understand the potential consequences of threats and vulnerabilities, securing the mechanisms of the Internet by improving protocols and routing, fostering the use of trusted digital control and supervisory control and data acquisition systems, understanding infrastructure interdependencies and improving the physical security of cybersystems and telecommunications, and prioritizing federal cybersecurity research and development agendas. 3. A National Cyberspace Security Awareness and Training Program—This priority emphasizes promoting a comprehensive national awareness program to empower all Americans—businesses, the general workforce, and the general population—to secure their own parts of cyberspace. Other major actions and initiatives include fostering adequate training and education programs to support the nation’s cybersecurity needs; increasing the efficiency of existing federal cybersecurity training programs; and promoting private-sector support for well-coordinated, widely recognized professional cybersecurity certification. 4. Securing Governments’ Cyberspace—To help protect, improve, and maintain governments’ cybersecurity, major actions and initiatives for this priority include continuously assessing threats and vulnerabilities to federal cyber systems; authenticating and maintaining authorized users of federal cyber systems; securing federal wireless local area networks; improving security in government outsourcing and procurement; and encouraging state and local governments to consider establishing information technology security programs and participating in ISACs with similar governments. 5. National Security and International Cyberspace Security Cooperation—This priority identifies major actions and initiatives to strengthen U.S. national security and international cooperation. These include strengthening cyber-related counterintelligence efforts, improving capabilities for attack attribution and response, improving coordination for responding to cyber attacks within the U.S. 
national security community; working with industry and through international organizations to facilitate dialogue and partnerships among international public and private sectors focused on protecting information infrastructures; and fostering the establishment of national and international watch-and-warning networks to detect and prevent cyber attacks as they emerge. The National Strategy for the Physical Protection of Critical Infrastructures and Key Assets provides a statement of national policy to remain committed to protecting critical infrastructures and key assets from terrorist attacks. Although the strategy does not explicitly mention PDD 63, it builds on the directive with its sector-based approach that includes the 13 sectors defined in the National Strategy for Homeland Security, identifies federal departments and agencies as sector liaisons, and calls for expanding the capabilities of ISACs. The strategy is based on eight guiding principles, including establishing responsibility and accountability, encouraging and facilitating partnering among all levels of government and between government and industry, and encouraging market solutions wherever possible and government intervention when needed. The strategy also establishes three strategic objectives. The first is to identify and assure the protection of the most critical assets, systems, and functions in terms of national-level public health and safety, governance, economic and national security, and public confidence; this would include establishing a uniform methodology for determining national-level criticality. The second strategic objective is to assure the protection of infrastructures and assets facing specific, imminent threats, and the third is to pursue collaborative measures and initiatives to assure the protection of other potential targets that may become attractive over time. Under this strategy, DHS will provide overall cross-sector coordination and serve as the primary liaison and facilitator for cooperation among federal agencies, state and local governments, and the private sector. The strategy states that the private sector generally remains the first line of defense for its own facilities and that private-sector owners and operators should reassess and adjust their planning, assurance, and investment programs to better accommodate the increased risk presented by deliberate acts of violence. In addition, the Office of Homeland Security will continue to act as the President's principal policy advisory staff and coordinating body for major interagency policy issues related to homeland security. On February 28, 2003, EO 13231 was amended by Executive Order 13286. Although EO 13286 maintained the same national policy statement regarding the protection against disruption of information systems for critical infrastructures, it dissolved the President's Critical Infrastructure Protection Board, which was to coordinate cyber-related federal efforts and programs associated with protecting our nation's critical infrastructures; the position of the board's chair, the Special Advisor to the President for Cyberspace Security, and related staff; and the 10 standing committees established to support the board's work on a wide range of critical information infrastructure efforts. According to EO 13286, the NIAC is to continue to provide the President with advice on the security of information systems for critical infrastructures supporting other sectors of the economy.
However, NIAC will provide its advice through the Secretary of Homeland Security. Regarding the functions of the standing committees, an OMB official stated that OMB will continue to oversee the federal information security committee functions. Further, recent media reports state that efforts are under way to ensure the transition of certain other functions to DHS. On March 1, 2003, DHS assumed certain essential information analysis and infrastructure protection functions and organizations, including NIPC (other than the Computer Investigations and Operations Section) and the CIAO. Currently, according to a DHS official, the department is continuing to carry out the activities previously performed by NIPC and the other transferred functions and organizations. Further, the official stated that the department is enhancing those activities as they are integrated within the new department and is developing a business plan. The DHS official stated that the department is continuing previously established efforts to maintain and build relationships with other federal entities, including the FBI and other NIPC partners, and with the private sector. In addition, the department plans to provide staff to work at the proposed Terrorist Threat Integration Center. Although NIPC experienced the loss of certain senior leadership prior to the transition to the new department and some staffing needs have been identified, the DHS official stated that the department is able to provide the functions previously performed by NIPC. Although the actions taken to date are major steps toward more effectively protecting our nation's critical infrastructures, we have made numerous recommendations over the last several years concerning CIP challenges that still need to be addressed. For each of these challenges, improvements have been made and continuing efforts are in progress. However, even greater efforts are needed to address them. These challenges include developing a comprehensive and coordinated national CIP plan, improving information sharing on threats and vulnerabilities, improving analysis and warning capabilities, and ensuring appropriate incentives to encourage entities outside of the federal government to increase their CIP efforts. It is also important that CIP efforts be appropriately integrated with DHS. An underlying issue in the implementation of CIP is that no national plan yet exists that clearly delineates the roles and responsibilities of federal and nonfederal CIP entities, defines interim objectives and milestones, sets time frames for achieving objectives, and establishes performance measures. Such a clearly defined plan is essential for defining the relationships among all CIP organizations to ensure that the approach is comprehensive and well coordinated. Since 1998, we have reported on the need for such a plan and made numerous related recommendations. In September 1998, we reported that developing a governmentwide strategy that clearly defined and coordinated the roles of federal entities was important to ensure governmentwide cooperation and support for PDD 63. At that time, we recommended that OMB and the Assistant to the President for National Security Affairs ensure such coordination.
In January 2000, the President issued Defending America's Cyberspace: National Plan for Information Systems Protection: Version 1.0: An Invitation to a Dialogue as a first major element of a more comprehensive effort to protect the nation's information systems and critical assets from future attacks. The plan proposed achieving the twin goals of making the U.S. government a model of information security and developing a public/private partnership to defend our national infrastructures. However, this plan focused largely on federal cyber CIP efforts, saying little about the private-sector role. In September 2001, we reported that questions had surfaced among agencies regarding the specific roles and responsibilities of entities involved in cyber CIP and the time frames within which CIP objectives were to be met, as well as guidelines for measuring progress. Accordingly, we made several recommendations to supplement those we had made in the past. Specifically, we recommended that the Assistant to the President for National Security Affairs ensure that the federal government's strategy to address computer-based threats define

- specific roles and responsibilities of organizations involved in CIP and related information security activities;
- interim objectives and milestones for achieving CIP goals and a specific action plan for achieving these objectives, including implementing vulnerability assessments and related remedial plans; and
- performance measures for which entities can be held accountable.

In July 2002, we issued a report identifying at least 50 organizations that were involved in national or multinational cyber CIP efforts, including 5 advisory committees, 6 Executive Office of the President organizations, 38 executive branch organizations associated with departments, agencies, or intelligence organizations, and 3 other organizations. Although our review did not cover organizations with national physical CIP responsibilities, the large number of organizations that we did identify as involved in CIP efforts underscores the need to clarify how these entities coordinate their activities with each other. Our report also stated that PDD 63 did not specifically address other possible critical sectors and their respective federal agency counterparts. Accordingly, we recommended that the federal government's strategy also include all relevant sectors, define the key federal agencies' roles and responsibilities associated with each of these sectors, and define the relationships among the key CIP organizations. In July 2002, the National Strategy for Homeland Security called for interim cyber and physical infrastructure protection plans that DHS would use to build a comprehensive national infrastructure plan. According to the National Strategy for Homeland Security, the national plan is to provide a methodology for identifying and prioritizing critical assets, systems, and functions, and for sharing protection responsibility with state and local government and the private sector. The plan is to establish standards and benchmarks for infrastructure protection and provide a means to measure performance. The strategy also states that DHS is to unify the currently divided responsibilities for cyber and physical security. In November 2002, as mentioned previously, the Homeland Security Act of 2002 created DHS and, among other things, required it to develop a comprehensive national plan.
In February 2003, the President issued the interim strategies—The National Strategy to Secure Cyberspace and The National Strategy for the Physical Protection of Critical Infrastructures and Key Assets (hereafter referred to in this testimony as the cyberspace security strategy and the physical protection strategy). Both define strategic objectives for protecting our nation's critical assets. These strategies identify priorities, actions, and responsibilities for the federal government, including federal lead departments and agencies and DHS, as well as for state and local governments and the private sector. However, the two do not (1) clearly indicate how the physical and cyber efforts will be coordinated; (2) define the roles, responsibilities, and relationships among the key CIP organizations, including state and local governments and the private sector; (3) indicate time frames or milestones for their overall implementation or for accomplishing specific actions or initiatives; or (4) establish performance measures for which entities can be held responsible. Until a comprehensive and coordinated plan is completed that unifies the responsibilities for cyber and physical infrastructures; identifies roles, responsibilities, and relationships for all CIP efforts; establishes time frames or milestones for implementation; and establishes performance measures, our nation risks not having a consistent and appropriate framework to deal with growing threats to its critical infrastructure. Information sharing is a key element in developing comprehensive and practical approaches to defending against cyber attacks, which could threaten the national welfare. Information on threats, vulnerabilities, and incidents experienced by others can help an organization identify trends, better understand the risks it faces, and determine what preventive measures should be implemented. However, as we have reported in recent years, establishing the trusted relationships and information-sharing protocols necessary to support such coordination can be difficult. In addition, the private sector has expressed concerns about sharing information with the government and about the difficulty of obtaining security clearances. In October 2001, we reported on information-sharing practices that could benefit CIP. These practices include

- establishing trust relationships with a wide variety of federal and nonfederal entities that may be in a position to provide potentially useful information and advice on vulnerabilities and incidents;
- developing standards and agreements on how shared information will be used and protected;
- establishing effective and appropriately secure communications mechanisms; and
- taking steps to ensure that sensitive information is not inappropriately disseminated, which may require statutory changes.

A number of activities have been undertaken to build relationships between the federal government and the private sector, such as InfraGard, the Partnership for Critical Infrastructure Security, efforts by the CIAO, and efforts by lead agencies to establish ISACs. For example, the InfraGard Program, which provides the FBI and NIPC with a means of securely sharing information with individual companies, has expanded substantially. By early January 2001, 518 entities were InfraGard members—up from 277 members in October 2000. Members include representatives from private industry, other government agencies, state and local law enforcement, and the academic community. As of February 2003, InfraGard members totaled over 6,700.
As stated above, PDD 63 encouraged the voluntary creation of ISACs to serve as the mechanism for gathering, analyzing, and appropriately sanitizing and disseminating information between the private sector and the federal government through NIPC. ISACs are critical because private-sector entities control over 80 percent of our nation's critical infrastructures. Their activities could improve the security posture of the individual sectors, as well as provide an improved level of communication within and across sectors and all levels of government. While PDD 63 encouraged the creation of ISACs, it left the actual design and functions of the ISACs, along with their relationship with NIPC, to be determined by the private sector in consultation with the federal government. PDD 63 did provide suggested activities that the ISACs could undertake, including

- establishing baseline statistics and patterns on the various infrastructures;
- serving as a clearinghouse for information within and among the various sectors;
- providing a library of historical data for use by the private sector and government; and
- reporting private-sector incidents to NIPC.

In April 2001, we reported that NIPC and other government entities had not developed fully productive information-sharing relationships but that NIPC had undertaken a range of initiatives to foster information-sharing relationships with ISACs, as well as with government and international entities. We recommended that NIPC formalize relationships with ISACs and develop a plan to foster a two-way exchange of information between them. In response to our recommendations, NIPC officials told us in July 2002 that an ISAC development and support unit had been created, whose mission was to enhance private-sector cooperation and trust, resulting in a two-way sharing of information. DHS now reports that there are currently 16 ISACs, including ISACs established for sectors not identified as critical infrastructure sectors. Table 3 lists the current ISACs identified by DHS and the lead agencies. DHS officials stated that they have formal agreements with most of the current ISACs. In spite of the progress made in establishing ISACs, additional efforts are needed. Not all sectors have a fully established ISAC, and of those sectors that do, there is mixed participation. The amount of information being shared between the federal government and private-sector organizations also varies. Specifically, the five ISACs we recently reviewed showed different levels of progress in implementing the activities suggested by PDD 63. Four of the five reported that efforts to establish baseline statistics were still in progress. Also, while all five reported that they serve as the clearinghouse for their own sectors, only three of the five reported that they are also coordinating with other sectors. Only one of the five ISACs reported that it provides a library of incidents and historical data that is available to both the private sector and the federal government; although three additional ISACs do maintain a library, it is available only to the private sector. The one remaining ISAC reported that it had yet to develop a library but has plans to do so. Finally, four of the five stated that they report incidents to NIPC on a regular basis. Some in the private sector have expressed concerns about voluntarily sharing information with the government.
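To make the sanitizing role the directive envisions for ISACs concrete, here is a minimal sketch of how a member's incident report might be stripped of identifying details before being passed to the government. The field names and the rule of dropping member-identifying fields are hypothetical illustrations, not any ISAC's actual procedure.

# A minimal, hypothetical sketch of ISAC-style sanitization: member-identifying
# fields are dropped before an incident report is shared onward.

MEMBER_IDENTIFYING_FIELDS = {"company", "contact", "ip_addresses", "site"}

def sanitize(report):
    """Return a copy of the report with member-identifying fields removed."""
    return {k: v for k, v in report.items() if k not in MEMBER_IDENTIFYING_FIELDS}

incident = {
    "company": "Example Power Co.",          # identifying: dropped
    "contact": "ops@example.com",            # identifying: dropped
    "sector": "electricity",
    "attack_type": "worm",
    "systems_affected": "control network workstations",
    "date": "2003-01-25",
}

# What the ISAC could forward to NIPC: sector-level facts, no member identity.
print(sanitize(incident))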
The concerns that have been raised include that industry could potentially face antitrust violations for sharing information with other industry partners, have its information subject to the Freedom of Information Act (FOIA), or face potential liability for information shared in good faith. For example, the information technology, energy, and water ISACs do not share their libraries with the federal government because of concerns that the information could be released under FOIA. Officials of the energy ISAC also stated that they have not reported incidents to NIPC because of FOIA and antitrust concerns. Other obstacles to information sharing, previously mentioned in congressional testimony, include difficulty obtaining security clearances for ISAC personnel and reluctance to disclose corporate information. In July 2002 congressional testimony, the Director of Information Technology for the North American Electric Reliability Council stated that the owners of critical infrastructures need access to more specific threat information and analysis from the public sector and that this may require either more security clearances or declassifying information. There will be continuing debate as to whether adequate protection is being provided to the private sector as these entities are encouraged to disclose and exchange information on both physical and cyber security problems and solutions that are essential to protecting our nation's critical infrastructures. The National Strategy for Homeland Security, which outlines 12 major legislative initiatives, includes "enabling critical infrastructure information sharing." It states that the nation must meet this need by narrowly limiting public disclosure of information relevant to protecting our physical and cyber critical infrastructures in order to facilitate its voluntary submission. It further states that the Attorney General will convene a panel to propose any legal changes necessary to enable sharing of essential homeland security-related information between the federal government and the private sector. Actions have already been taken by the Congress and the administration to strengthen information sharing. For example, the USA PATRIOT Act promotes information sharing among federal agencies, and numerous terrorism task forces have been established to coordinate investigations and improve communications among federal and local law enforcement. Moreover, the Homeland Security Act of 2002 includes provisions that restrict federal, state, and local government use and disclosure of critical infrastructure information that has been voluntarily submitted to DHS. These restrictions include exemption from disclosure under FOIA, a general limitation on use to CIP purposes, and limitations on use in civil actions and by state or local governments. The act also provides penalties for any federal employee who improperly discloses protected critical infrastructure information. At this time, it is too early to tell what impact the new law will have on the willingness of the private sector to share critical infrastructure information. Information sharing within the government also remains a challenge. In April 2001, we reported that NIPC and other government entities had not developed fully productive information-sharing and cooperative relationships.
For example, federal agencies had not routinely reported incident information to NIPC, at least in part because guidance provided by the federal Chief Information Officers Council, which is chaired by the Office of Management and Budget, directs agencies to report such information to the General Services Administration's FedCIRC. Further, NIPC and DOD officials agreed that their information-sharing procedures needed improvement, noting that protocols for reciprocal exchanges of information had not been established. In addition, the expertise of the U.S. Secret Service regarding computer crime had not been integrated into NIPC efforts. The NIPC director stated in July 2002 that the relationship between NIPC and other government entities had significantly improved since our review and that the quarterly meetings with senior government leaders were instrumental in improving information sharing. In addition, in testimony subsequent to our work, officials from FedCIRC and the U.S. Secret Service discussed the collaborative and cooperative relationships that had since been formed between their agencies and NIPC. The private sector has also expressed concerns about the value of information being provided by the government. For example, in July 2002 the president of the Partnership for Critical Infrastructure Security stated in congressional testimony that information sharing between the government and private sector needs work, specifically in the quality and timeliness of cybersecurity information coming from the government. The cyberspace security strategy reiterates that the federal government encourages the private sector to continue to establish ISACs and to enhance the analytical capabilities of existing ISACs. It states that ISACs will play an increasingly important role in the national cyberspace security response system and the overall missions of homeland security. In addition, the physical protection strategy states that the overall management of information-sharing activities among government agencies and between the public and private sectors has lacked proper coordination and facilitation. The physical protection strategy also establishes specific initiatives for creating more effective and efficient information sharing, including defining protection-related information-sharing requirements, promoting the development and operation of critical sector ISACs, and implementing the statutory authorities and powers of the Homeland Security Act of 2002. Another key CIP challenge is to develop more robust analysis and warning capabilities to identify threats and provide timely warnings, including an effective methodology for strategic analysis and a framework for collecting needed threat and vulnerability information. Such capabilities need to address both cyber and physical threats. NIPC was established in PDD 63 as "a national focal point" for gathering information on threats and facilitating the federal government's response to computer-based incidents. Specifically, the directive assigned NIPC the responsibility for providing comprehensive analyses on threats, vulnerabilities, and attacks; issuing timely warnings on threats and attacks; facilitating and coordinating the government's response to computer-based incidents; providing law enforcement investigation and response; monitoring reconstitution of minimum required capabilities after an infrastructure attack; and promoting outreach and information sharing.
This responsibility requires obtaining and analyzing intelligence, law enforcement, and other information to identify patterns that may signal that an attack is under way or imminent. Similar activities are also called for in DHS's Information Analysis and Infrastructure Protection Directorate, which has absorbed NIPC. In April 2001, we reported on NIPC's progress in developing national capabilities for analyzing threat and vulnerability data, issuing warnings, and responding to attacks, among other issues. Overall, we found that while progress in developing these capabilities was mixed, NIPC had initiated a variety of CIP efforts that had laid a foundation for future governmentwide efforts. In addition, NIPC had provided valuable support and coordination related to investigating and otherwise responding to attacks on computers. However, at the close of our review, the analytical capabilities that PDD 63 asserted were needed to protect the nation's critical infrastructures had not yet been achieved, and NIPC had developed only limited warning capabilities. Developing such capabilities is a formidable task that experts say will take an intense interagency effort. At the time of our review, NIPC had issued a variety of analytical products, most of which had been tactical analyses pertaining to individual incidents. In addition, it had issued a variety of publications, most of which were compilations of information previously reported by others, with some NIPC analysis. We reported that the use of strategic analysis to determine the potential broader implications of individual incidents had been limited. Such analysis looks beyond one specific incident to consider a broader set of incidents or implications that may indicate a potential threat of national importance (a simple illustration of this kind of cross-incident analysis appears later in this discussion). Identifying such threats assists in proactively managing risk, including evaluating the risks associated with possible future incidents and effectively mitigating the impact of such incidents. We also reported that three factors hindered NIPC's ability to develop strategic analytical capabilities. First, there was no generally accepted methodology for analyzing strategic cyber-based threats. For example, there was no standard terminology, no standard set of factors to consider, and no established thresholds for determining the sophistication of attack techniques. According to officials in the intelligence and national security community, developing such a methodology would require an intense interagency effort and dedication of resources. Second, NIPC had sustained prolonged leadership vacancies and did not have adequate staff expertise, in part because other federal agencies had not provided the originally anticipated number of detailees. For example, at the close of our review in February 2001, the position of Chief of the Analysis and Warning Section, which was to be filled by the Central Intelligence Agency, had been vacant for about half of NIPC's 3-year existence. In addition, NIPC had been operating with only 13 of the 24 analysts that NIPC officials estimated were needed to develop analytical capabilities. Third, NIPC did not have industry-specific data on factors such as critical system components, known vulnerabilities, and interdependencies. Under PDD 63, such information is to be developed for each of eight industry segments by industry representatives and the designated federal lead agencies.
However, at the close of our work, only three industry assessments had been partially completed, and none had been provided to NIPC. In September 2001, we reported that although outreach efforts had raised awareness and improved information sharing, substantive, comprehensive analysis of infrastructure sector interdependencies and vulnerabilities had been limited. To provide a warning capability, NIPC had established a Watch and Warning Unit that monitors the Internet and other media 24 hours a day to identify reports of computer-based attacks. While some warnings were issued in time to avert damage, most of the warnings, especially those related to viruses, pertained to attacks already under way. We reported that NIPC's ability to issue warnings promptly was impeded by (1) the lack of a comprehensive governmentwide or nationwide framework for promptly obtaining and analyzing information on imminent attacks, (2) a shortage of skilled staff, (3) the need to ensure that NIPC does not raise undue alarm for insignificant incidents, and (4) the need to ensure that sensitive information is protected, especially when such information pertains to law enforcement investigations under way. In addition, NIPC's own plans for further developing its analysis and warning capabilities were fragmented and incomplete. The relationships among the Center, the FBI, and the National Coordinator for Security, Infrastructure Protection, and Counter-Terrorism at the National Security Council were unclear regarding who had direct authority for setting NIPC priorities and procedures and providing NIPC oversight. As a result, no specific priorities, milestones, or program performance measures existed to guide NIPC's actions or provide a basis for evaluating its progress. In our report, we recognized that the administration was reviewing the government's infrastructure protection strategy and recommended that, as the administration proceeded, the Assistant to the President for National Security Affairs, in coordination with pertinent executive agencies,

- establish a capability for strategically analyzing computer-based threats, including developing related methodology, acquiring staff expertise, and obtaining infrastructure data;
- require the development of a comprehensive data collection and analysis framework and ensure that national watch and warning operations for computer-based attacks are supported by sufficient staff and resources; and
- clearly define the role of NIPC in relation to other government and private-sector entities.

In July 2002, NIPC's director stated that, in response to our report's recommendations, NIPC had developed a plan with goals and objectives to improve its analysis and warning capabilities and had made considerable progress in this area. The plan establishes and describes performance measures both for its analysis and warning section and for other issues relating to staffing, training, investigations, outreach, and warning. In addition, the plan describes the resources needed to reach the specific goals and objectives for the analysis and warning section. The director also stated that the analysis and warning section had created two additional teams to bolster its analytical capabilities: (1) the critical infrastructure assessment team, to focus efforts on learning about particular infrastructures and coordinating with respective infrastructure efforts, and (2) the collection operations intelligence liaison team, to coordinate with various entities within the intelligence community.
The director added that NIPC (1) had started holding quarterly meetings with senior government leaders of entities that it regularly works with to better coordinate its analysis and warning capabilities; (2) had developed close working relationships with other CIP entities involved in analysis and warning activities, such as FedCIRC, DOD's Joint Task Force for Computer Network Operations, Carnegie Mellon's CERT Coordination Center, and the intelligence and anti-virus communities; and (3) had developed and implemented procedures to more quickly share relevant CIP information, while separately continuing any related law enforcement investigation. The director also stated in July 2002 that NIPC had received sustained leadership commitment from key entities, such as the CIA and the National Security Agency, and that it continued to increase its staff, primarily through reservists and contractors. However, the director acknowledged that our recommendations were not fully implemented and that despite the accomplishments to date, much more had to be done to create the robust analysis and warning capabilities needed to adequately address cyberthreats. Another challenge confronting the analysis and warning capabilities of our nation is that, historically, our national CIP attention and efforts have been focused on cyber threats. In April 2001, we reported that while PDD 63 covers both physical and computer-based threats, federal efforts to meet the directive's requirements had pertained primarily to computer-based threats, since this was the area that the leaders of the administration's CIP strategy viewed as needing attention. In July 2002, NIPC reported that the potential for concurrent cyber and physical attacks, referred to as "swarming attacks," is an emerging threat to the U.S. critical infrastructure. In July 2002, the director of NIPC told us that NIPC had begun to develop some capabilities for identifying physical CIP threats. For example, NIPC had developed thresholds with several ISACs for reporting physical incidents and, since January 2002, had issued several information bulletins concerning physical CIP threats. However, NIPC's director acknowledged that fully developing this capability will be a significant challenge. The physical protection strategy states that DHS will maintain a comprehensive, up-to-date assessment of vulnerabilities across sectors and improve processes for domestic threat data collection, analysis, and dissemination to state and local government and private industry. Another critical issue in developing effective analysis and warning capabilities is ensuring that appropriate intelligence and other threat information, both cyber and physical, is received from the intelligence and law enforcement communities. For example, there has been considerable public debate regarding the quality and timeliness of intelligence data shared between and among relevant intelligence, law enforcement, and other agencies. Also, because the transfer of NIPC to DHS organizationally separated NIPC from the FBI's law enforcement activities, including the Counterterrorism Division and NIPC field agents, it will be critical to establish mechanisms for continued communication. Further, it will be important that the relationships between the law enforcement and intelligence communities and the new DHS are effective and that appropriate information is exchanged on a timely basis.
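To give a flavor of the strategic analysis discussed above (looking across many tactical incident reports for patterns that may indicate a threat of national importance), the following sketch groups reports by attack technique and flags any technique seen in several sectors within a short window. The grouping rule and thresholds are hypothetical illustrations, not NIPC's methodology, which, as noted above, did not yet exist in a generally accepted form.

# A minimal, hypothetical sketch of strategic (cross-incident) analysis:
# flag attack techniques reported across several sectors in a short window.

from collections import defaultdict
from datetime import date, timedelta

WINDOW = timedelta(days=30)   # hypothetical correlation window
SECTOR_THRESHOLD = 3          # hypothetical "national significance" threshold

def flag_patterns(reports):
    """reports: iterable of (date, sector, technique) tuples."""
    by_technique = defaultdict(list)
    for when, sector, technique in reports:
        by_technique[technique].append((when, sector))
    flagged = []
    for technique, hits in by_technique.items():
        hits.sort()
        first, last = hits[0][0], hits[-1][0]
        sectors = {sector for _, sector in hits}
        if last - first <= WINDOW and len(sectors) >= SECTOR_THRESHOLD:
            flagged.append((technique, sorted(sectors)))
    return flagged

reports = [
    (date(2003, 1, 5), "banking and finance", "worm"),
    (date(2003, 1, 9), "electricity", "worm"),
    (date(2003, 1, 20), "water", "worm"),
    (date(2003, 2, 14), "telecommunications", "port scan"),
]
# Flags the worm, which hit three sectors in 15 days; the lone port scan is not flagged.
print(flag_patterns(reports))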
In January 2003, the President announced the creation of a multiagency Terrorist Threat Integration Center (TTIC) to merge and analyze terrorist-related information collected domestically and abroad in order to form the most comprehensive possible threat picture. The center will be formed from elements of the Department of Homeland Security, the FBI's Counterterrorism Division, the Director of Central Intelligence's Counterterrorist Center, and the Department of Defense. Specifically, the President stated that it would

- optimize the use of terrorist threat-related information, expertise, and capabilities to conduct threat analysis and inform collection strategies;
- create a structure that ensures information sharing across agency lines in a way consistent with our national values of privacy and civil liberties;
- integrate terrorist-related information collected domestically and abroad in order to form the most comprehensive possible threat picture; and
- be responsible and accountable for providing terrorist threat assessments for our national leadership.

The TTIC is scheduled to begin operations within the CIA's facilities on May 1, 2003, but will eventually move to a new, independent facility. The center is to receive $50 million in fiscal year 2004. The TTIC will fuse international threat-related information from the CIA with domestic threat-related information collected by the FBI's Joint Terrorism Task Forces and analyzed by a separate FBI information-analysis center. In addition, according to NIPC's director, as of July 2002, a significant challenge in developing a robust analysis and warning function is the development of the technology and human capital capacities needed to collect and analyze substantial amounts of information. Similarly, the Director of the FBI testified in June 2002 that implementing a more proactive approach to preventing terrorist acts and denying terrorist groups the ability to operate and raise funds require a centralized and robust analytical capacity that did not exist in the FBI's Counterterrorism Division. He also stated that processing and exploiting information gathered domestically and abroad during the course of investigations requires an enhanced analytical and data mining capacity that was not then available. Furthermore, NIPC's director stated that multiagency staffing, such as NIPC's, is a critical success factor in establishing an effective analysis and warning function and that appropriate funding for such staff is important. The National Strategy for Homeland Security identified intelligence and warning as one of six critical mission areas and called for major initiatives to improve our nation's analysis and warning capabilities. The strategy also stated that no government entity was then responsible for analyzing terrorist threats to the homeland, mapping these threats to our vulnerabilities, and taking protective action. The Homeland Security Act gives such responsibility to the new DHS. Further, the act gives DHS broad statutory authority to access intelligence information, as well as other information relevant to the terrorist threat, and to turn this information into useful warnings. For example, according to a White House fact sheet, DHS's Information Analysis and Infrastructure Protection Directorate is to receive and analyze terrorism-related information from the TTIC. An important aspect of improving our nation's analysis and warning capabilities is having comprehensive vulnerability assessments.
The President's National Strategy for Homeland Security also stated that comprehensive vulnerability assessments of all of our nation's critical infrastructures are important from a planning perspective in that they enable authorities to evaluate the potential effects of an attack on a given sector and then invest accordingly to protect it. The strategy stated that the U.S. government does not perform vulnerability assessments of the nation's entire critical infrastructure. The Homeland Security Act of 2002 states that DHS's Under Secretary for Information Analysis and Infrastructure Protection is to carry out comprehensive assessments of the vulnerabilities of the key resources and critical infrastructures of the United States. The President's fiscal year 2004 budget request for the new DHS includes $829 million for information analysis and infrastructure protection, a significant increase from the estimated $177 million for fiscal year 2003. In particular, the requested funding for protection includes about $500 million to identify key critical infrastructure vulnerabilities and support the necessary steps to ensure that security is improved at these sites. It also includes almost $300 million for warning advisories, threat assessments, a communications system, and outreach efforts to state and local governments and the private sector. Even so, additional incentives may still be needed to encourage nonfederal entities to increase their CIP efforts.

PDD 63 also stated that sector liaisons should identify and assess economic incentives to encourage the desired sector behavior in CIP. Further, to facilitate private-sector participation, it encouraged the voluntary creation of ISACs to serve as mechanisms for gathering, analyzing, and appropriately sanitizing and disseminating information to and from infrastructure sectors and the federal government through NIPC. Consistent with the original intent of PDD 63, the National Strategy for Homeland Security states that, in many cases, sufficient incentives exist in the private market for addressing the problems of CIP. However, the strategy also discusses the need to use policy tools to protect the health, safety, or well-being of the American people. It mentions federal grant programs to assist state and local efforts, legislation to create incentives for the private sector, and, in some cases, regulation. The physical security strategy reiterates that additional regulatory directives and mandates should be necessary only in instances where market forces are insufficient to prompt the investments needed to protect critical infrastructures and key assets. The cyberspace security strategy likewise states that the market is to provide the major impetus to improve cyber security and that regulation will not become a primary means of securing cyberspace.

Last year, the Comptroller General testified on the need for strong partnerships with those outside the federal government and noted that the new department would need to design and manage tools of public policy to engage and work constructively with third parties. We have previously testified on the choice and design of public policy tools that are available to governments. These tools include grants, regulations, tax incentives, and regional coordination and partnerships to motivate or mandate other levels of government or the private sector to address security concerns. Some of these tools are already being used.
For example, as the lead agency for the water sector, EPA reported providing 449 grants totaling approximately $51 million to assist large drinking water utilities in developing vulnerability assessments, emergency response/operating plans, security enhancement plans and designs, or a combination of these efforts. In a different approach, the American Chemistry Council, the ISAC for the chemical sector, requires as a condition of membership that its members perform enhanced security activities, including vulnerability assessments. However, because a significant percentage of the companies that operate major hazardous chemical facilities do not perform these voluntary security activities, the physical security strategy recognized that mandatory measures may be required. The strategy stated that EPA, in consultation with DHS and other federal, state, and local agencies, will review current laws and regulations pertaining to the sale and distribution of highly toxic substances to determine whether additional measures are necessary. Moreover, the strategy also stated that DHS, in concert with EPA, will work with Congress to enact legislation requiring certain facilities, particularly those that maintain large quantities of hazardous chemicals in close proximity to large populations, to enhance site security.

Without appropriate consideration of public policy tools, private-sector participation in sector-related CIP efforts may not reach its full potential. For example, we reported in January 2003 on the efforts of the financial services sector to address cyber threats, including industry efforts to share information and to better foster and facilitate sectorwide efforts. We also reported on the efforts of federal entities and regulators to partner with the financial services industry to protect critical infrastructures and to address information security. We found that although federal entities had a number of efforts ongoing, Treasury, in its role as sector liaison, had not undertaken a comprehensive assessment of the potential public policy tools to encourage the financial services sector in implementing CIP-related efforts. Because of the importance of considering public policy tools to encourage private-sector participation, we recommended that Treasury assess the need for public policy tools to assist the industry in meeting the sector's goals. In addition, in February 2003, we reported on the mixed progress five ISACs had made in accomplishing the activities suggested by PDD 63. We recommended that the responsible lead agencies assess the need for public policy tools to encourage increased private-sector CIP activities and greater sharing of intelligence and incident information between the sectors and the federal government.
Protecting the computer systems that support federal agencies' operations and our nation's critical infrastructures--such as power distribution, telecommunications, water supply, and national defense--is a continuing concern. These concerns are well founded for a number of reasons, including the dramatic increases in reported computer security incidents, the ease of obtaining and using hacking tools, the steady advance in the sophistication and effectiveness of attack technology, and the dire warnings of new and more destructive attacks. GAO first designated computer security as a high-risk area in 1997 and in 2003 expanded this high-risk area to include protecting the systems that support our nation's critical infrastructures, referred to as cyber critical infrastructure protection, or cyber CIP. GAO has previously made recommendations and periodically testified on federal information security weaknesses--including agencies' progress in implementing key legislative provisions on information security--and on the challenges the nation faces in protecting its critical infrastructures. GAO was asked to provide an update on the status of federal information security and CIP.

With the enactment of the Federal Information Security Management Act of 2002, the Congress continued its efforts to improve federal information security by permanently authorizing and strengthening key information security requirements. The administration has also made progress through a number of efforts, among them the Office of Management and Budget's emphasis on information security in the budget process. However, significant information security weaknesses at 24 major agencies continue to place a broad array of federal operations and assets at risk of fraud, misuse, and disruption. Although recent reporting by these agencies showed some improvements, GAO found that agencies still have not established information security programs consistent with the legal requirements. For example, periodic testing of security controls is essential to security program management, but for fiscal year 2002, 14 agencies reported that they had tested the controls of less than 60 percent of their systems. Further information security improvement efforts are also needed at the governmentwide level, and these efforts need to be guided by a comprehensive strategy in which roles and responsibilities are clearly delineated, appropriate guidance is given, adequate technical expertise is obtained, and sufficient agency information security resources are allocated.

Although improvements have been made in protecting our nation's critical infrastructures and continuing efforts are in progress, further efforts are needed to address the critical challenges that GAO has identified over the last several years. These challenges include (1) developing a comprehensive and coordinated national CIP plan; (2) improving information sharing on threats and vulnerabilities between the private sector and the federal government, as well as within the government itself; (3) improving analysis and warning capabilities for both cyber and physical threats; and (4) encouraging entities outside the federal government to increase their CIP efforts.
Long-term simulations can be useful for comparing potential outcomes of alternative policies within a common economic framework. Given the broad range of uncertainty about future economic changes, however, any simulations should not be interpreted as forecasts of the level of economic activity 30 years in the future. Instead, simulation results illustrate the budget or economic outcomes associated with alternative policy paths. In our most recent work, we used a long-term economic growth model to simulate three of the many possible fiscal paths through the year 2025: a “no action” path that assumed the continuation of fiscal policies in effect at the end of fiscal year 1994; a “muddling through” path that assumed annual deficits of approximately 3 percent of gross domestic product (GDP); and a “balance” path that assumed the budget reaches balance in 2002 and sustains it. To suggest some of the trade-offs facing policymakers in choosing among fiscal policies, we examined some long-term economic and fiscal outcomes of these paths. We also simulated how some types of early action on the deficit, including early action on health care spending, might affect the long-term deficit outlook. Finally, we examined the prospects for sustaining balance over the long term. While we discuss the consequences of alternative fiscal paths, we do not suggest any particular course of action, since only the Congress can resolve the fundamental policy question of choosing the fiscal policy path and the composition of federal activity.

In our simulations, we employed a model originally developed by economists at the Federal Reserve Bank of New York (FRBNY) that relates long-term GDP growth to economic and budget factors. Details of that model and its assumptions can be found in our reports.

As we noted in 1992 and 1995, important and compelling benefits can be gained from shifting to a new fiscal policy path. As illustrated in figure 1, chronic deficits have consumed an increasing share of a declining national savings pool, leaving that much less for private investment. Lower investment will ultimately show up in lower economic growth. Future generations of taxpayers will pay a steep price for this lower economic growth in terms of lower personal incomes and a generally lower standard of living at a time when they will face the burden of supporting an unprecedented number of retirees as the baby boom generation reaches retirement. The problem has been that the damage done by deficits is long-term, gradual, and cumulative in nature and may not be as visible as the short-term costs involved in reducing deficits. This has presented, and continues to present, a difficult challenge for public leaders, who must mount a compelling case for deficit reduction—and for the steps required to achieve it—that can capture public support.

The updated simulations we presented to you and Chairman Domenici last spring confirmed that the nation’s current fiscal policy path is unsustainable over the longer term. Specifically, a fiscal policy of “no action” on the deficit through 2025 implies federal spending of nearly 44 percent of GDP and, as figure 2 shows, a deficit of over 23 percent of GDP. Let me explain these ominous trends. The increased spending is principally a function of escalating federal spending on health care and Social Security, which is driven by projected rising health care costs and the aging of our population. Spending on interest on our national debt also rises as annual deficits and accumulated public debt expand.
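These trends compound through a simple feedback loop: deficits add to debt, debt raises interest outlays, and interest outlays widen the deficit. Our simulations capture this interaction within the richer FRBNY-based framework; the sketch below isolates the debt-interest feedback alone, using hypothetical stand-in values for the initial debt ratio, GDP growth, interest rate, and primary balances. It is not the model we used and reproduces only the direction of the compounding, not our simulated magnitudes.

```python
# Simplified sketch of the deficit-debt-interest feedback loop.
# All parameter values are hypothetical illustrations; this is NOT the
# FRBNY-based model used in the simulations discussed above.

def debt_path(primary_balance, years=30, debt0=0.52,
              gdp_growth=0.023, interest_rate=0.065):
    """Project debt as a share of GDP.

    primary_balance: non-interest deficit (+) or surplus (-), as a share
    of GDP. Each year, interest accrues on the prior year's debt, the
    primary balance adds to (or retires) debt, and GDP growth scales
    the ratio down.
    """
    debt = debt0
    for _ in range(years):
        interest_cost = interest_rate * debt  # interest outlays, share of GDP
        debt = (debt + primary_balance + interest_cost) / (1 + gdp_growth)
    return debt

# Stylized counterparts to the three simulated paths; the primary
# balances are chosen so that early total deficits roughly echo each
# scenario (e.g., "muddling through" starts near a 3 percent deficit).
for label, balance in [("no action", 0.03),
                       ("muddling through", -0.01),
                       ("balance", -0.03)]:
    print(f"{label:>16}: debt/GDP after 30 years ~ {debt_path(balance):.0%}")
```

Under these stand-in values, the “no action” path sends the debt ratio past three times GDP, “muddling through” lets it drift steadily upward, and the balance path brings it down sharply; the pattern, though not the magnitudes, matches the simulations described here.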
Essentially, current commitments to health care and Social Security, along with interest on the debt, become progressively unaffordable for the nation over time. Without any significant changes in spending or revenues, such an expanding deficit would result in collapsing investment, declining capital stock, and, inevitably, a declining economy by 2025. As emphasized in both our 1992 and 1995 reports, we do not believe that such a scenario would take place. Rather, we believe that the prospect of economic decline would prompt action before the end of our simulation period. Nevertheless, this “no action” scenario, by illustrating the future logic of existing commitments, powerfully makes the case that we have no choice but to take action on the deficit. The questions that remain are when and how.

Our 1995 simulations also confirm the long-term economic and fiscal benefits of deficit reduction. We assessed the long-term impacts of balancing the budget by 2002, as was contemplated in the fiscal year 1996 budget resolution and in the recent executive-congressional discussions over budget policy, and of sustaining such a posture through 2025. We also estimated the effects of following a path that we called “muddling through”—that is, running deficits of about 3 percent of GDP over the next 30 years. Although current policy is better than this in the near term, it is still a useful illustration.

A fiscal policy of balance would yield a stronger economy in the long term than either a policy of no action or of muddling through. Table 1 shows that a budget balance reached in 2002 and sustained until 2025 would, over time, lead to increased investment, increased capital stock, a larger economy, and a much lower national debt than either of the other scenarios. This means that Americans could enjoy a higher standard of living than they might otherwise experience. Reaching and sustaining balance would also shrink the share of federal spending required to pay interest costs, thereby reducing the long-term programmatic sacrifice necessary to attain deficit reduction targets. Even “muddling through” with deficits of 3 percent of GDP would exact a price through higher interest costs and thus require progressively harder fiscal choices as time progresses. Under the balance path, debt per capita would decline from $13,500 in 1994 to $4,800 (in 1995 dollars) by 2025; debt as a percentage of the economy would drop from about 52 percent to 13 percent. Because of this shrinkage in the debt, by 2025 a balance path could bring interest costs down from about 12 percent of our budget in 1994 to less than 5 percent, compared with about 18 percent under “muddling through” and almost a third of our budget with no action. These differences are illustrated in figure 3.

Alarming as these model results may appear, they are probably understated. Our model incorporates conservative assumptions about the relationships among savings, investment, and GDP growth that tend to understate the differences between the economic outcomes associated with alternative fiscal policies. Furthermore, budget projections for the near term and those assumed in our long-term model results may not tell the whole story. By convention, baseline budget projections do not include all the legitimate claims that may be made on the budget in the future. Rather, budget projections ignore many future claims and the costs of unmet needs unless they are the subject of policy proposals in the budget.
Examples of such claims and needs include the cost of cleaning up and restructuring the Department of Energy’s (DOE) nuclear weapons production complex, the cost of hazardous waste cleanup at military facilities, and cost overruns in weapons systems. In short, most of the risks to future budgets seem to be on the side of worse-than-expected, rather than better-than-expected, outcomes. I make this observation not to create despair but to underline the need to continue efforts at deficit reduction.

Not all spending cuts have the same impact over the long run. Decisions about how to reduce the deficit will reflect—among other considerations—judgments about the role of the federal government and the effectiveness of individual programs. I would like to call attention today to two significant considerations in deficit reduction: (1) the importance of federal investment in infrastructure, human capital, and research and development (R&D) and (2) the importance of addressing the fast-growing programs in the budget.

In our 1992 work, we drew particular attention to the importance of well-chosen federal investment in infrastructure, human capital, and R&D. A higher level of national savings is essential to achieving a higher rate of economic growth but, by itself, is not sufficient to assure that result. Certain other ingredients are necessary—including the basic stability with which this nation has been blessed in its social, political, and economic environment. In addition, however, economic growth depends on an efficient public infrastructure, an educated workforce, an expanding base of knowledge, and a continuing infusion of innovations. In the past, the federal government, through its investments in these areas, has played an important role in providing an environment conducive to growth. Thus, the composition of federal spending, as well as overall fiscal policy, can affect long-term economic growth in significant ways.

The extent to which deficit reduction affects spending on fast-growing programs also matters. Although a dollar is a dollar in the first year it is cut—regardless of what programmatic changes it represents—cutbacks in the base of fast-growing programs generate greater savings in the future than those in slower-growing programs, assuming the specific cuts are not offset by increases in the growth rates of the programs. Figure 4 illustrates this point by comparing the long-run effects of a $50 billion cut in health spending with those of the same dollar amount cut from unspecified other programs. For both paths, the cut occurs in 1996 and is assumed to be permanent, but, after 1996, spending is assumed to continue at the same rates of growth as those shown in the “no action” simulation. We assumed that a reduction in either health or other programs would not alter the expected growth rates simply to illustrate the point that a cut in high-growth areas of spending will have a greater fiscal effect in the future than a cut of the same size in low-growth areas; the arithmetic is sketched below. A $50 billion cut in health spending in 1996 leads to a deficit in 2025 that is about 4 percent of GDP lower than would be the case with a $50 billion cut in a low-growth program. Further, our simulations show that even if a balanced budget is achieved early in the next century, deficits would reappear if we fail to contain future growth in health, interest, and Social Security costs.
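A minimal calculation makes the compounding point concrete. The growth rates below are hypothetical stand-ins for the actual “no action” baseline rates, so the figures are illustrative only:

```python
# Compounding effect of a permanent $50 billion cut made in 1996 and
# carried forward at each program's assumed growth rate. The growth
# rates are hypothetical stand-ins for the "no action" baseline rates.

def annual_savings(cut=50.0, growth_rate=0.0, start=1996, end=2025):
    """Savings in the end year, in billions of dollars, if the cut
    grows along with the program's spending base."""
    return cut * (1 + growth_rate) ** (end - start)

fast = annual_savings(growth_rate=0.08)  # fast-growing program (e.g., health)
slow = annual_savings(growth_rate=0.03)  # slow-growing program
print(f"2025 savings from the fast-growing cut: ${fast:,.0f} billion")
print(f"2025 savings from the slow-growing cut: ${slow:,.0f} billion")
```

Under these illustrative rates, the same 1996 cut is worth roughly four times as much per year by 2025 when taken from the faster-growing base, which is the pattern figure 4 displays.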
We conclude from these simulations that how and when deficit reduction occurs can have important long-term implications for the future economy and future budgets. As noted earlier, the benefits of deficit reduction in the long run may not seem as compelling as the short-term costs necessary to reduce the deficit. Nevertheless, our work on the deficit reduction experiences of other nations shows that significant fiscal improvement is indeed possible in modern democracies, at least for a time.

To reach fiscal balance or surplus, the governments we studied instituted often painful measures while generating and maintaining political support. Spending control proved the dominant policy tool used to achieve fiscal goals, although few programs were actually eliminated. Notably, however, several countries restrained social benefit commitments in their quest for savings. Government leaders sought to gain support, or at least defuse potential opposition, by bringing key interest groups that would be affected into the decision-making process. In addition, the design of the specific deficit-reducing strategies helped. Approaches such as reducing benefits instead of eliminating programs, targeting benefit cuts to higher-income beneficiaries, and deferring or shifting painful adjustments all helped maintain political support for spending reductions.

The deficit reduction brought about in these governments provided significant fiscal benefits by slowing or reversing the growth of public debt, thereby slowing or reversing the growth of government interest costs. As we simulated in our long-term growth model, what was once a “vicious” circle of rising deficits, debt, and interest costs, which can in turn increase deficits, became a “virtuous” circle of falling deficits or rising surpluses. These benefits accrued even though most of the governments we studied did not sustain fiscal balance or surplus, possibly in part because public support for austerity was frequently linked to relatively short-run concerns. Despite this return to deficit, the increases in savings and investment resulting from deficit reduction may have boosted economic prospects for the long-term future, as well as provided fiscal benefits in the short run.

Although the experiences of the nations in GAO’s study suggest that reducing deficits is possible in advanced democracies, they also indicate that sustaining fiscal discipline over the longer term is difficult. Thus, deficit reduction strategies designed to promote long-term fiscal progress may help ensure that future budgets are better positioned to withstand future economic and political pressures. For the United States, reaching budgetary balance in 2002 would indeed represent an achievement that by itself would bring about fiscal and economic benefits. Yet this achievement will not eliminate the need for future fiscal discipline. In fact, the needs of an aging society will be more easily met if fiscal balance—or even surplus—is both achieved and sustained for several years.

In conclusion, Mr. Chairman, I would repeat our view that current policy is unsustainable. The question, therefore, is not whether to reduce the deficit but when and how. We believe those choices matter. Mr. Chairman, this concludes my written statement. I would be happy to answer any questions you or your colleagues might have.
GAO discussed its work on the budget deficit and long-term economic growth. GAO noted that: (1) its long-term simulations show that unless the budget deficit is reduced or eliminated, economic growth, personal incomes, national investment, and the standard of living will be sharply reduced; (2) the nation's present fiscal policy is unsustainable in the long term; (3) reaching and sustaining a balanced budget would reduce federal spending on interest, a fast-growing segment of federal spending; (4) reductions in spending on fast-growing health, Social Security, and interest costs would be most beneficial and would have the most sustained effects; and (5) foreign governments' deficit reduction efforts have been painful but have provided significant fiscal benefits.